ChatGPT and other generative AI tools are driving a surge of AI adoption in software development. Some experts predict that generative AI could eventually replace 40% of knowledge workers.
It’s understandable that enterprise software companies and their investors want to improve productivity and reduce labor time and costs, especially when development talent is scarce. But ChatGPT and open-source AI present real risks for software development teams.
Open-source AI needs careful handling. Handled carelessly, it can cost your company intellectual property, degrade product quality, and simply amplify the common mistakes of traditional software development. Don’t assume that AI is going to immediately transform your team’s productivity. Instead, CEOs of software companies need to think carefully and proceed with caution. In the battle of AI vs. human intelligence, the right team of human talent using the right technology will accelerate software development beyond what AI can produce on its own.
Before you double down on the latest flavor of the month in software development AI, here are three big cautionary insights that software CEOs need to consider.
Generative AI for Software Businesses Poses Big Risks
ChatGPT and other open-source AI tools for software development bring a few big risks. The first is intellectual property (IP) loss. Did you know that ChatGPT can essentially take your code and reuse it? Read the Service Level Agreement (SLA) and Terms of Use carefully.
If you copy and paste your code into ChatGPT, you have effectively granted them a license to do whatever they want with it. When a competitor uses ChatGPT, they might see parts of your code regurgitated back at them. Now you have to fight IP leakage.
Here’s another risk to watch for when using open-source AI in software development: more mistakes and inconsistent results. Imagine an AI trained on Stack Overflow, a public community where developers post coding problems and any number of other developers chime in with suggestions or solutions. No amount of upvoting or downvoting is going to surface the solutions that will work effectively in the specific context of the challenges you are facing. That won’t stop tools like ChatGPT from happily generating code for you. But the code is only as good as the (typically junior) developers who have time to contribute to such forums.
Utilization of Stack Overflow has dropped by 35% in the past few months because people are going to ChatGPT instead. But software teams need to be cautious when deciding how to apply these technologies to software automation. People have a misperception that an AI’s answers are more likely to be right.
But that’s not true – it all depends on the quality of the AI’s training.
Again: AI and automation tools are amplifiers. And they can amplify bad information just as strongly as the good stuff. You can’t trust everything that random internet users have posted.
In traditional software development, when you hit a problem, you might search online, find a snippet that looks close enough, and paste it in. Fold in ChatGPT or similar AI models that were trained only on publicly available content, and they will amplify those same mistakes in ways that diminish your confidence in the results, as the sketch below illustrates.
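Here is a minimal, hypothetical sketch of how that amplification plays out. The snippet and names are our own illustration, not taken from any particular forum post or model output: the unsafe function mirrors a string-formatted SQL pattern that appears in countless public answers, which is exactly the kind of pattern a model trained on that content can reproduce at scale.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often copied from public forum answers: string-formatted SQL.
    # An AI trained on those posts can reproduce this SQL-injection risk at scale.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the database driver handles escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    crafted = "' OR '1'='1"  # classic injection payload
    print(find_user_unsafe(conn, crafted))  # leaks every user in the table
    print(find_user_safe(conn, crafted))    # returns nothing, as intended
```

Multiply that by every query, every endpoint, and every sprint, and you can see how quickly a tool that amplifies the average of the internet also amplifies its mistakes.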
AI in Software Development Won’t Replace Top Talent
The promise of generative AI for software development is mostly about reducing labor time and costs. To many software developers, that promised feature feels like a bug, with the prediction that “40% of knowledge workers could be replaced by AI” firmly in their minds.
I believe that this hype is overblown. In the last few months, the accuracy of ChatGPT responses has fallen through the floor. AI in software development is only as good as its training and the people around it who manage, curate, and guide it.
The right way to think about AI is that it’s just another automation tool, and it’s an accelerant. AI is an amplifier. Just as an audio amplifier increases the amplitude of whatever signal you put in, automation and AI amplify whatever you feed them. As the old saying goes: “garbage in, garbage out.”
Modularis has been in the generative AI space for 23 years. Our generative, low-code platform is designed to reduce the labor necessary to build commercial SaaS products and platforms.
My prediction is that generative AI technologies like ChatGPT, as they evolve, will probably reduce knowledge worker labor by about 15%, but mainly in commodity-oriented content creation jobs. There will be some attrition of knowledge workers, but it won’t be anywhere near 40%. For software development, there is too much complexity and too many creative tasks that humans simply do better.
Look at manufacturing and automation. People used to worry that robots would replace them on the factory assembly line. But Tesla recently learned a hard lesson: robots can’t do everything. Elon Musk had to scrap hundreds of millions of dollars worth of robotics and automation hardware because humans just did a better job at those tasks. In 2018, Musk said that Tesla’s excessive factory automation had been a mistake, and that “humans are underrated.”
The future of software development will be a blend of human intelligence and creativity augmented by pattern-driven generators (robots), and some level of generative AI. But the robots will not replace the people.
For example, if building software is like building a deck, your developers will spend a lot of time driving screws into wood by hand. Now imagine you give them power screwdrivers. The tools let people drive those screws faster, more effectively, and more consistently. But people still have to decide WHERE to drive those screws, and in which sequence, to get a high-quality deck built right, built fast, and built to last.
This is the right analogy for effective use of generative AI in software development. Sure, your developers need to be trained on how to use the new tool and will have to level up their skills, but the tool is not going to replace the people, and the people must still be held accountable for the final result.
The Right Way to Use AI in Software Development: Carefully
At Modularis, we’re making investments in large language models (LLMs) for software automation. But we are proceeding with caution. We will train the LLMs only on the best practices that are baked into our tech stack, built on our knowledge base and experience acquired over the past 23 years.
Want to get true value out of generative AI in the software engineering space? Use platforms that are truly trustworthy, white-box, and well understood. At Modularis, that’s been our focus for the past 20-plus years. Our open and battle-tested architecture, model-driven automation, and generative, low-code platform are vehicles to deliver our collective knowledge and expertise to help software companies build enterprise products and single-source, multi-tenant cloud platforms with minimum risk and maximum return.
Our benchmark for AI in software development is that if you look at the product code, you’ll be unable to tell that it was generated by AI. It will look like it was built by experienced software engineers and architects with unlimited time and energy. Don’t compromise on code quality, consistency, cleanliness, or maintainability. If you compromise on these qualities, you lose flexibility, scalability, and even stability.
We are optimistic about AI, but we’re approaching it cautiously. Ultimately, it will help us enhance the value we deliver with our low-code platform. But I’m skeptical about other companies that rely too heavily on ChatGPT and open-source AI, without paying attention to training and best practices.
There’s a famous quote from Forrest Gump: “Life was like a box of chocolates. You never know what you’re going to get.” The same applies to ChatGPT and AI in software development: the average quality of code for these products is still too unpredictable and inconsistent – it’s non-deterministic.
With today’s generative AI, if you ask it the same question twice, you’ll get a different response each time as it learns and changes. For some software development tasks, that might be good enough. But it’s not good enough for laying down the foundation of a commercial software platform – you need something deterministic. On the Modularis platform, once you design your model and hit generate, you know exactly what you’ll get: the same patterns, implemented consistently and at scale.
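To make the contrast concrete, here is a minimal sketch of deterministic, template-driven generation. It is our own illustration with made-up names, assuming a toy entity model – it is not the Modularis platform or any specific product. Because the generator is a pure function of the model, the same input produces byte-for-byte identical output no matter how many times you regenerate.

```python
from string import Template

# Illustrative only: a toy code template keyed off a simple entity model.
ENTITY_TEMPLATE = Template(
    "class ${name}Repository:\n"
    "    \"\"\"Data access for the ${name} entity.\"\"\"\n"
    "\n"
    "    def get_by_id(self, ${key}: int):\n"
    "        # The same pattern is emitted identically for every entity in the model.\n"
    "        return self._db.query(\"SELECT * FROM ${table} WHERE ${key} = ?\", ${key})\n"
)

def generate(model: dict) -> str:
    # Pure function of the model: identical input always yields identical output.
    return ENTITY_TEMPLATE.substitute(**model)

if __name__ == "__main__":
    customer = {"name": "Customer", "key": "customer_id", "table": "customers"}
    first = generate(customer)
    second = generate(customer)
    assert first == second  # deterministic: regenerate as often as you like
    print(first)
```

That repeatability is what lets you regenerate an entire codebase after a model change and trust that every pattern lands exactly the same way – a guarantee that sampled LLM output, by design, does not make.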
Inconsistent, learn-as-it-goes AI will amplify the worst aspects of traditional software development. If your approach to AI is simply, “Let’s reduce labor costs on software development with AI” – you might just be making more spaghetti code faster. And you’ll need to untangle that mess eventually, or you just might run out of money trying.
Software CEOs: Proceed With Caution on AI
CEOs want lower costs, lower headcount, and faster time to market. That’s what drives EBITDA. So when CEOs read about ChatGPT or see videos on LinkedIn about what it can do, they turn to their CTO and say, “Hey, let’s look into this.”
But too many CEOs are not concerned enough about the possible downsides and risks. If you are giving a directive to your CTO to implement AI in software development without using the right approach, your company is going to have problems.
The CTO might say, “Let’s go all in.” CTOs should know better, but they might have already copied and pasted their code into ChatGPT. Instead of going all in, tap the brakes. You don’t want to lose your IP or end up with mistake-riddled code. Be careful and manage your expectations.
As a CEO, you might think, “I can reduce labor costs by 40% by using this technology.” But that expectation will not be met, and you can take on significant risks without realizing it. Success with AI in software development depends on the mechanisms you use to approach it. You need a mastery of building enterprise software products, and that skill set is in short supply.
I expect a lot of companies to get into trouble with generative AI, with more software product failures and project failures. We need less hype and more caution.
Using AI? Think Critically, Like an Engineer
Years ago, when the scientific calculator first came out, the best professors in college math, physics, and operations research courses made sure that you solved problems analytically, even if that meant writing a proof over many pages or working an equation with variables instead of numbers. It was important to understand the concepts first, show your work, and only then use the calculator.
Lots of people made the mistake of assuming that if you jumped straight to the calculator, the number it gave you must be right. But the calculator’s number wasn’t always right, because you didn’t understand the underlying problem.
Today, with too many people willing to just accept the quick solution of generative open-source AI, there is a big risk that software product design will suffer from this same loss of critical thinking. Instead of looking for quick answers to your software automation challenges, focus on the fundamentals of software product design and the first principles of software engineering.