Adiscon has been developing robust, high-performance logging and data processing software for decades. As the long-term stewards of rsyslog, we focus on reliability, transparency, and engineering quality. Our work powers systems in enterprises, government organizations, and operational environments around the world.

Our AI strategy builds directly on these principles.

AI Where It Improves Real Outcomes
We integrate AI in ways that strengthen existing workflows and deliver measurable value. This includes:
- assisting development and documentation work
- improving diagnostics, configuration guidance, and troubleshooting
- accelerating support while maintaining human oversight
- reducing operational friction in large or complex environments
- helping teams work more efficiently with both Linux and Windows logging stacks
The goal is always the same: use AI to make systems easier to operate, not harder.

AI Agents for rsyslog and Windows Logging
A key part of this work is our family of AI-powered assistants. The first agent focuses on rsyslog and helps users explore configuration options, understand modules, and navigate the ecosystem more effectively.
We are now expanding this approach to WinSyslog and our other Windows-based logging solutions, providing consistent, intelligent assistance across platforms. These agents support both open-source users and commercial customers, always with human validation and transparent decision paths.
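As an illustration of the kind of configuration guidance such an agent provides, here is a minimal, hypothetical rsyslog snippet in RainerScript syntax that receives syslog messages over TCP and writes them to a file. The port and file path are illustrative assumptions, not a recommended production setup:

```
# Load the TCP input module and listen on port 514 (illustrative value)
module(load="imtcp")
input(type="imtcp" port="514")

# Write everything received to a single file (path is an example)
action(type="omfile" file="/var/log/remote.log")
```

An agent can explain each directive, point to the relevant module documentation, and suggest adjustments for a user's environment.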

Respect for Open Source and Its Community
Adiscon’s AI efforts are designed to complement, not replace, the way rsyslog is developed within the open-source community.
- We do not alter the community decision-making structure.
- Human maintainers remain responsible for architecture and commits.
- AI systems are tools, not decision makers.
Our goal is to support contributors, reduce friction, and make it easier to engage with the project.

On-Premises AI for Security-Focused Organizations
Many of our customers operate in environments where cloud-based AI is not an option. For these cases, we are evaluating on-premises AI systems based on Ollama-powered local LLM runtimes.
These systems enable:
- modern open-model AI fully inside a customer’s infrastructure
- secure handling of sensitive configuration or operational data
- predictable, auditable behavior
- integration with existing monitoring and logging workflows
In a later step, we plan to extend this into on-premises AI pipeline components that complement our established tooling.
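To make "fully inside a customer's infrastructure" concrete, the following is a minimal Python sketch against Ollama's local HTTP API (by default served at port 11434, with a `/api/generate` endpoint). The model name and prompt are illustrative assumptions; no data leaves the host:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing is sent to a cloud service.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    'stream': False requests one complete JSON response instead of a
    token-by-token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama instance and return the text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a local Ollama instance with the model pulled):
# print(ask_local_llm("llama3", "Explain the rsyslog imtcp module."))
```

Because the runtime is a plain local HTTP service, it can be audited, firewalled, and wired into existing monitoring like any other on-host component.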

A Practical, Responsible AI Roadmap
Our approach is incremental and engineering-driven. We avoid hype and focus on solutions that measurably improve the work of our customers, partners, and community members.
We will continue to share updates as our AI initiatives evolve. If you are interested in AI-assisted infrastructure, customized AI agents, or on-premises deployments, we are available for collaboration and consulting.