I have held a ham radio license since the late 1960s, and I have watched the transition from vacuum tubes (remember those?) to transistors firsthand. Because we are allowed to run high-power transmitters (up to 1,500 watts output), vacuum tubes have survived in our world long after they disappeared elsewhere. There is a reason for this. Vacuum tubes are high-power devices that are ideal for people who don't always know what they're doing, people who are just smart enough to be dangerous. About the only way to damage a tube is to run it hot enough to melt its internal components. That happens… but it means there is a lot of room for error.
A transistor is the opposite. If a transistor exceeds its specifications for even a millionth of a second, it is destroyed. If tubes are like football players, transistors are like professional dancers: very strong and powerful, but one bad landing can mean a serious sprain. As a result, there is a big difference between high-power vacuum tube equipment and transistor equipment. Cooling a vacuum tube means putting a fan next to it. Cooling a transistor that generates 500 watts of heat in an area the size of a dime requires heavy copper spreaders, huge heat sinks, and multiple fans. A vacuum tube amplifier is a box with a large power supply, large vacuum tubes, and output circuitry. A transistor amplifier has all of that plus computers, sensors, and other electronics that can shut it down if anything goes wrong. Many adjustments that used to be made by turning a knob have been automated. It's easy to see that automation as a convenience, but it is really a necessity: if those adjustments weren't automated, the transistors would burn out before the amplifier ever got on the air.
Software is undergoing a similar transition. The early days of the web were simple: HTML, minimal JavaScript, CSS, and CGI. Applications have clearly become more complex: databases, middleware, and backends paired with elaborate frontend frameworks are all part of our world now, and attacks on applications of every kind are becoming more frequent and more severe. Observability is the first step toward a “transistor-like” approach to building software: making sure you capture enough relevant data to anticipate problems before they occur. Capturing only enough data for post-mortem analysis is not enough.
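The distinction between post-mortem data and anticipatory data can be made concrete. Here is a minimal sketch (not from the article; the class name, thresholds, and latency metric are all invented for illustration) of a monitor that flags a service as degrading before it crosses a hard limit, rather than only recording that the limit was breached:

```python
from collections import deque


class LatencyMonitor:
    """Track recent request latencies and warn *before* a hard limit is hit.

    The window size and the 800 ms / 1000 ms thresholds are illustrative
    values, not anything the article specifies.
    """

    def __init__(self, window=100, warn_ms=800, limit_ms=1000):
        self.samples = deque(maxlen=window)  # rolling window of recent latencies
        self.warn_ms = warn_ms
        self.limit_ms = limit_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def status(self):
        if not self.samples:
            return "ok"
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.limit_ms:
            return "failing"    # post-mortem territory: the limit is already breached
        if avg >= self.warn_ms:
            return "degrading"  # the useful signal: act before users notice
        return "ok"


# Illustrative run: the monitor reports trouble while there is still time to react.
monitor = LatencyMonitor(window=3)
for ms in [300, 350, 400]:
    monitor.record(ms)
print(monitor.status())  # "ok"

for ms in [850, 900, 950]:
    monitor.record(ms)
print(monitor.status())  # "degrading": above the warning threshold, below the hard limit
```

The point of the sketch is the middle state: a system instrumented only for post-mortems jumps straight from "ok" to "failing", while the "degrading" signal is what lets humans or automation intervene in advance.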
Although we are moving in the right direction, the stakes are higher with AI. This year we will see AI integrated into all kinds of applications, and AI brings many new challenges for developers and IT staff to deal with. The list starts with:
- Security issues: Whether maliciously or just for fun, people will try to make AI misbehave. You can expect racist, misogynistic, and simply false results, and that is a business problem, not just a technical one.
- Additional security concerns: We have already seen AI systems leak users' data to other parties, whether “accidentally” or in response to malicious prompts.
- More security concerns: Language models are often used to generate source code, and that code is frequently insecure. An attacker can also prompt a model to generate insecure code on demand.
- Freshness: Models eventually go “stale” and need retraining, and there is no evidence that large language models are an exception. Language itself changes slowly, but the topics you want your model to understand change quickly.
- Copyright: These issues are only beginning to work their way through the courts, but AI application developers will almost certainly face some degree of liability for copyright violations.
- Other liabilities: We are just beginning to see laws on privacy and transparency, with Europe the clear leader. Whether or not the United States ever passes effective laws regulating the use of AI, companies that operate internationally will have to comply with laws elsewhere.
That's just the beginning. My point is not to enumerate everything that can go wrong, but rather that complexity is increasing to the point where direct, manual monitoring is impossible. This is something the financial industry learned long ago (and continues to learn). Algorithmic trading systems must continuously monitor themselves and alert humans to intervene at the first sign that something is wrong. If the errors persist, an automatic “circuit breaker” should shut the application down. And if those safeguards fail, a human must be able to shut it down manually. Without these safeguards, the result can look like Knight Capital, whose algorithmic trading software made $440 million worth of mistakes on the first day it was deployed.
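The three safeguards above, alerting at the first anomaly, tripping automatically when errors persist, and keeping a manual kill switch as the last resort, can be sketched as a tiny circuit breaker. This is an illustrative toy, not anything Knight Capital or the article describes; the class name and the error threshold are invented:

```python
class CircuitBreaker:
    """Toy sketch of the safeguards described above: alert on the first
    anomaly, trip automatically if errors persist, and keep a manual
    kill switch as a last resort. The threshold is illustrative."""

    def __init__(self, max_consecutive_errors=3):
        self.max_errors = max_consecutive_errors
        self.consecutive_errors = 0
        self.tripped = False

    def record_result(self, ok):
        if self.tripped:
            return  # already halted; nothing more to record
        if ok:
            self.consecutive_errors = 0  # healthy result resets the count
            return
        self.consecutive_errors += 1
        print("ALERT: anomaly detected, humans should look now")
        if self.consecutive_errors >= self.max_errors:
            self.trip("error threshold exceeded")

    def trip(self, reason):
        self.tripped = True
        print(f"CIRCUIT OPEN: application halted ({reason})")

    def manual_kill(self):
        # The last line of defense when the automation itself fails.
        self.trip("manual shutdown")


# One healthy result, then three consecutive errors: the breaker
# alerts on each error and trips automatically on the third.
breaker = CircuitBreaker()
for ok in [True, False, False, False]:
    breaker.record_result(ok)
print(breaker.tripped)  # True
```

The design point is that the alert fires well before the breaker trips, and the manual kill exists independently of the automatic path, so a human can still stop the system when the automation misjudges.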
The problem is that the AI industry has not yet learned from others' experience. We are still moving fast and breaking things, while transitioning from relatively simple software (and yes, I consider large React-based frontends with enterprise backends “relatively simple” compared to LLM-based applications) to software that entangles many more processing nodes, software whose workings we don't fully understand, and software that can do damage at scale. And like a modern high-power transistor amplifier, this software is too complex and fragile to manage manually. It isn't yet clear that we know how to build the automation needed to manage AI applications. Learning to build those automated systems should be a priority over the next few years.