Focus on DevAIOps in 2026
DevAIOps is a concept advocated by DASA (DevOps Agile Skills Association). It is a methodology that integrates AI and LLMs with DevOps principles to further automate and optimize software development and operations processes.
According to DASA, DevAIOps has five principles:
- Building literacy and knowledge sharing for responsible adoption
- Importance of governance, compliance, and ethical use
- Improvement through experimentation and iteration
- Strategic utilization of data as assets
- Optimization of value and cost
In 2025, AI coding took off and caused significant disruption in the engineering community. AI coding agents wrote reasonably good code (at first), but they frustrated us with sudden breakdowns and code that deviated from established rules. Personally, I’ve tried to keep them in check with SpecKit, coding guidelines, and small PRs, but they still trip me up from time to time.
That said, it’s hard to imagine going back to an era without AI coding agents. Even if a company bans them outright, I suspect we’d only end up in a world where shadow AI proliferates, with individuals quietly using AI from their personal smartphones.
About 2026
This trend is expected to expand into other areas in 2026:
- Testing
- Documentation
- CI
- Monitoring
These areas are likely to see further automation through AI, for example:
- Automatic issue creation with error details when errors occur
- Generating system fix code based on issues and creating PRs
- Reviewing PRs and automatically generating missing tests
- Automatically updating and translating documentation based on current system content
Automation to roughly this extent already seems achievable.
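As an illustration of the first item, here is a minimal sketch that files a GitHub issue with the error details when an unhandled exception occurs. The repository slug is hypothetical, and the script assumes a `GITHUB_TOKEN` environment variable with permission to create issues; the endpoint itself is GitHub’s public REST API for issue creation.

```python
import os
import traceback

import requests

# Hypothetical repository slug; GITHUB_TOKEN must be able to create issues.
REPO = "example-org/example-service"
API_URL = f"https://api.github.com/repos/{REPO}/issues"


def report_error_as_issue(exc: Exception) -> None:
    """File a GitHub issue containing the error type and stack trace."""
    # format_exception(exc) is the Python 3.10+ single-argument form.
    trace = "".join(traceback.format_exception(exc))
    resp = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"[auto] {type(exc).__name__}: {exc}",
            "body": f"An unhandled error occurred.\n\n{trace}",
        },
        timeout=10,
    )
    resp.raise_for_status()


try:
    1 / 0  # stand-in for real application work
except Exception as exc:
    report_error_as_issue(exc)
```

From an issue like this, a coding agent could then propose a fix and open a PR, completing the chain described above.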
Compatibility with DevOps
DevOps divides development and operations into the following eight areas. Here is a summary of AI/LLM initiatives in each (there may be gaps):
| Area | AI/LLM initiatives |
|---|---|
| PLAN | Requirements brainstorming, review |
| CODE | SDD (spec-driven development), AI coding agents, AI code review |
| BUILD | |
| TEST | Test code generation, flexible test execution, root cause analysis for errors |
| RELEASE | Release note generation, documentation updates |
| DEPLOY | |
| OPERATE | Anomaly detection |
| MONITOR | Alert noise reduction |
Coverage is clearly uneven across these areas today. In 2026, I expect powerful services and software to emerge for the areas that are currently lacking.
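As one concrete illustration of the MONITOR row, alert noise reduction doesn’t even need an LLM to get started: simple fingerprint-based grouping already collapses near-duplicate alerts. The alert messages below are invented for the example.

```python
import re
from collections import defaultdict

# Hypothetical alerts as they might arrive from a monitoring system.
alerts = [
    "db-01: connection timeout after 30s (request id 8f3a)",
    "db-02: connection timeout after 30s (request id 91cc)",
    "api-01: 502 Bad Gateway from upstream",
    "db-03: connection timeout after 30s (request id 04be)",
]


def fingerprint(message: str) -> str:
    """Normalize an alert so near-duplicates share one grouping key."""
    msg = re.sub(r"^[\w.-]+:", "<host>:", message)  # leading host name
    msg = re.sub(r"\b[0-9a-f]{4,}\b", "<id>", msg)  # hex request ids
    msg = re.sub(r"\d+", "<n>", msg)                # remaining numbers
    return msg


groups: dict[str, list[str]] = defaultdict(list)
for alert in alerts:
    groups[fingerprint(alert)].append(alert)

# Page a human once per group instead of once per raw alert.
for key, members in groups.items():
    print(f"{len(members)} alert(s): {key}")
```

An LLM layer could then summarize each group, rather than each individual alert, before paging a human.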
About the Division of Roles Between AI and Humans
The integration of AI into DevOps cannot be stopped. Trying to hold it back would be like resisting the cloud in the 2000s or smartphones and tablets in the 2010s. The times have changed, and there is little point in resisting.
For individuals, there’s no choice but to “ride this big wave.” The coding area has already undergone disruptive changes by AI in 2025, so we just need to actively tackle other areas as well.
So, having actively incorporated AI in this way, what should humans do? What remains for humans is “communication” and “responsibility.”
Communication
AI produces what looks like best practice, but it only looks that way: it generates plausibly good output from vast amounts of past content. In other words, it is a grand-scale reinvention of the wheel.
Conversely, attempts to create something genuinely unprecedented will come with many failures. Moreover, because many LLMs are tuned toward outputs that humans judge favorably, they tend to agree even with incorrect opinions. That can lead to major mistakes in business or in personal matters. So if you use AI as a substitute consultant, the reports may read well, but be careful: they can mislead you about actual business value.
In this sense, what is required of humans is the ability to find issues and solutions through human-to-human communication. We are (probably) not building services for AI, so we need people and companies to think well of what we make. That involves fairly emotional elements, and mechanically insisting “this should be good!” won’t get through.
Responsibility
AI cannot take responsibility. LLM services like ChatGPT always state that answers are not necessarily correct, and that will not change. In other words, humans must always take responsibility for what AI outputs. An engineer should never answer with “because the AI said so.”
However, we can make AI output easier for humans to inspect and verify. CodeRabbit, which I’m involved with, generates summaries, sequence diagrams, and change overviews for code reviews to “reduce the cognitive load when humans review.” AI can do this kind of groundwork for humans. Another example is summarizing the key points when an error occurs, or suggesting possible fixes.
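As a minimal sketch of that last example, the snippet below asks an LLM to condense an error log into a few bullet points a human can quickly verify. It uses the OpenAI Python SDK; the model name and the log excerpt are assumptions, not a recommendation.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY

client = OpenAI()

# Hypothetical log excerpt; in practice this comes from your log pipeline.
error_log = """\
TimeoutError: connection to db-01:5432 timed out after 30s
  at pool.acquire (db/pool.py:88)
  at get_user (api/users.py:42)
"""

# The model name is an assumption; substitute whatever your team uses.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the error in three bullet points and suggest "
                "one likely fix. Be concise; a human will verify the "
                "result and take responsibility for the final call."
            ),
        },
        {"role": "user", "content": error_log},
    ],
)
print(response.choices[0].message.content)
```

The point is not the specific model but the shape of the workflow: AI prepares the digest, and the human does the verifying.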
Humans take the responsibility, but making that responsibility easier to carry is also becoming AI’s role.
Conclusion
Personally, I believe the DevAIOps trend will arrive in 2026. While 2025 was mainly about coding, AI will be put to use in other areas as well. In particular, I expect more players in the testing, CI, operations, and monitoring areas. In some cases, existing players may be displaced entirely.
For individual engineers, there is reason for anxiety, but also opportunity: a chance to create new tools and spread them globally. AI makes translation good enough, and there are plenty of OSS + SaaS business models to follow. I think the big waves will keep coming in 2026.