April 20, 2025 by Bamiyan Gobets
“The pace of innovation in AI is no longer something we observe from the sidelines, it’s now a defining factor in how we build, ship, and scale software.” – Bam Gobets
Here’s the Lowdown
AI-enabled development is a game-changer for outcomes. But it’s also a stress test for leadership, strategy, and company culture.
Below is a framework I’ve been sharing with other business leaders to help navigate what’s ahead, and to make the right choices when it comes to infrastructure, talent, and long-term AI strategy.
Short-Term (Now–2026): Speed, Cost, and Experimentation
AI-powered tools like GitHub Copilot, ChatGPT, and Claude are already accelerating developer productivity. Among Nebul’s customers, I’m seeing 30–50% gains in prototyping speed, faster bug resolution, and leaner QA cycles.
But here’s where a critical decision enters: do you plug into public AI SaaS and cloud offerings like Microsoft Azure, GitHub Copilot, and Cursor, or do you pursue a more protected route for your data and IP using private infrastructure, private cloud, and open-source models?
“Consider building AI capability on your own infrastructure, private cloud or dedicated instances, and utilizing only open-source LLMs to protect your data and IP.” – credit Copilot
Business Leaders Should:
- Invest in AI fluency: Make sure your engineering leads are comfortable using AI assistants and understanding their limitations. This is not just a technical shift, it’s cultural.
- Pilot AI in workflows: Introduce co-pilot tools or automation in code review, documentation, or QA. But consider where this happens. Tools like Azure OpenAI’s Codex or GitHub Copilot use shared LLM backends, so your code and prompts may be exposed to third parties and, by proxy, to foreign government entities.
“Decide now what risks you’re willing to tolerate before you get hooked on the specific tools.” – credit Perplexity
Evaluate your AI Infrastructure Options
1. Public SaaS (e.g., GitHub Copilot, Azure, Cursor)
✅ Easy to integrate, rapid prototyping, no infrastructure decisions needed
❌ 3rd party LLMs, IP potentially exposed, data exposed to foreign data requests
❌ 3rd party Vendor data/app lock-in
2. Sovereign AI/Private Cloud and open-source LLMs
✅ EU-based, GDPR-aligned, and insulated from third-party access and from data collection under the US CLOUD Act and FISA.
✅ Customize and fine-tune on private code, and use retrieval-augmented generation (RAG) to connect multi-million-line codebases for reference.
❌ Requires deploying internal infrastructure, or engaging a private AI cloud provider.
↳ This approach takes forethought, but it’s a high-value pursuit.
“If your IP is sensitive or you’re in a regulated sector, start experimenting on a sovereign stack early. Look into self-hosting with open-source models or work with dedicated private cloud stacks with European sovereign AI providers, like nebul.com” – credit Nebul PrivateGPT
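To make the sovereign option concrete, here is a minimal, purely illustrative sketch of RAG over a private codebase. The chunking and keyword-overlap retrieval are deliberately naive stand-ins (a production stack would use embeddings and a vector store), and the localhost endpoint in the final comment assumes a self-hosted open-source model behind an OpenAI-compatible server such as vLLM or Ollama.

```python
import re

# Illustrative sketch of RAG over a private codebase. Every name here is
# an assumption for the example, not a specific vendor's API.

def chunk_files(files: dict, size: int = 400) -> list:
    """Split each file into fixed-size character chunks tagged with its path."""
    chunks = []
    for path, text in files.items():
        for i in range(0, len(text), size):
            chunks.append((path, text[i:i + size]))
    return chunks

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    """Rank chunks by naive keyword overlap with the query."""
    terms = set(re.findall(r"\w+", query.lower()))
    def score(chunk):
        return len(terms & set(re.findall(r"\w+", chunk[1].lower())))
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(query: str, files: dict) -> str:
    """Assemble the prompt that would be sent to a privately hosted model."""
    top = retrieve(query, chunk_files(files))
    context = "\n\n".join(f"# {path}\n{chunk}" for path, chunk in top)
    return f"Answer using only this code context:\n{context}\n\nQuestion: {query}"

# The assembled prompt would then go to your own endpoint, never a public
# SaaS, e.g.:
# requests.post("http://localhost:8000/v1/chat/completions", json={...})
```

Because retrieval and prompt assembly run entirely on your own hardware, the only component that ever sees your code is the model you host yourself.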
Medium-Term (2026–2029): Redefining Team Structure and Value Creation
As AI continues to mature, software teams will change. Routine code writing will increasingly be handled by AI. What will matter is how well your teams can define, guide, and validate AI-generated systems.
Your leadership focus should shift:
- Redesign teams: Upskill developers to become AI product owners, validation engineers, and prompt/system designers.
- Strategically own your IP and development stack: If you’re still on public SaaS tools by 2026 without a sovereign option, you’re likely behind; competitors will be pulling away with fully customized, RAG-connected code assistants.
- Productize your strengths: Internal AI automation that saves your team time can often become external offerings. Look for opportunities to create AI-powered developer tools, APIs, or microservices tailored to your domain.
💡Strategic advantage comes not only from writing more code or writing it faster, but from shipping better value sooner while maintaining control of your intellectual property.
“Avoid sharing your regulated data, intellectual property and any innovations with SaaS or proprietary AI models, as they may be absorbing your inputs” – credit ChatGPT 4
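As a sketch of what “productizing” an internal automation could look like, here is a minimal HTTP wrapper around a hypothetical diff-summarizing helper. The route, payload shape, and the stubbed summarize_diff are illustrative assumptions; a real version would call your privately hosted model rather than count lines.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_diff(diff: str) -> str:
    """Stub for an internal AI automation; a real version would send the
    diff to a privately hosted model instead of counting lines."""
    added = sum(1 for line in diff.splitlines() if line.startswith("+"))
    removed = sum(1 for line in diff.splitlines() if line.startswith("-"))
    return f"{added} added, {removed} removed"

class ReviewHandler(BaseHTTPRequestHandler):
    """Exposes the internal helper as a tiny JSON-over-HTTP service."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        diff = json.loads(body)["diff"]
        reply = json.dumps({"summary": summarize_diff(diff)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Hypothetical local port; in production this sits behind your own gateway.
    HTTPServer(("localhost", 8080), ReviewHandler).serve_forever()
```

The point of the sketch is the shape, not the summarizer: once an internal helper has a clean API boundary like this, offering it externally is a packaging decision rather than a rewrite.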
Long-Term (2030+): From Software Builders to System Orchestrators
Autonomous agents will increasingly manage the software lifecycle: resolving tickets, writing features, even securing codebases.
But humans won’t disappear; they’ll rise to roles like:
- System orchestrators (setting goals for AI to execute)
- Ethical AI auditors (ensuring systems align with human values and law)
- Experience designers (making tech usable, human-centered, inclusive)
At this stage, your data, governance, and system design choices today will determine whether you’re leading or lagging.
Future-proof with infrastructure in mind:
- Proprietary or internal data + sovereign infrastructure = strategic moat.
- If you’re hosting your AI workflows on U.S.-controlled SaaS, be prepared for export risk, control issues, and potential audits.
- Companies that own their models, infrastructure, and datasets will dominate niches.
Final Advice for Business Leaders
If you’re running a software company in Europe, here’s your playbook:
In the next 12–18 months:
- Start small with AI pilots, on any platform, but don’t use sensitive data
- Decide on your infrastructure stack: Public SaaS or Sovereign AI?
- Establish internal governance policies around code, IP, and AI usage
In the next 3–5 years:
- Re-skill your teams for AI design and orchestration
- Build on EU-hosted infrastructure if compliance and control matter
- Turn internal AI efficiency into external product advantage
Long term:
- Build your moat: unique data, proprietary processes, strong ethical design
- Move from product builder to ecosystem leader