Risk and Ethics

In my column last month, I examined the factors that should go into making responsible decisions around risky ventures in our current environment, and concluded:

“The general shape of risk has not changed that much in the past 15 years, but every time is new. There is no shortcut or quick fix to making consequential decisions at the right time. The elements that go into risk management are unchanging: careful, patient, and meticulous thinking.”

I talked about the failure of Silicon Valley Bank and how the easing of Dodd-Frank regulations on bank size and annual stress testing meant that regulators failed to act on the scale of the risk they were observing. My colleague, former Citigroup operational risk executive Howard Stein, responded:

“Any time you have exponential growth, you have high risk. These banks grew at enormous rates. Neither personnel nor systems… and in the end not experience nor knowledge… could grow in size nor quality to keep up with the growth in both size and complexity of these banks' businesses. Huge growth in a short period of time is an Rx for disaster.”

It's not just the banking industry where we're seeing this phenomenon. James Searle noted recently that executives across technology and government seem happy to follow the letter of the law while ignoring its spirit. “Do the bare minimum to comply” appears to be the most common approach to regulation.

I've been watching the rollouts of artificial intelligence tools like OpenAI's text-generating chatbot, ChatGPT. Last month, Microsoft announced it was investing another $10 billion in OpenAI. Google has been working on its own chatbot for some years and, like Microsoft, had pulled back on earlier versions. Google released its chatbot, Bard, last month. Both companies have specific plans for how they will embed such a chatbot in tools such as search engines, where they are already rivals. Meanwhile, ChatGPT was estimated to have over 100 million users as of last January, and Bard has picked up 13 million users and a long waiting list since March 22.

A front-page New York Times story on April 8, “In A.I. Race, Microsoft and Google Choose Speed Over Caution,” focuses on how the need to dominate the arena has led each company to remove barriers, including the staff previously in place to call out ethical issues during further development of the products:

“In March, two Google employees, whose jobs are to review the company's artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

“Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society…

“Last week, tension between the industry’s worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple's co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented ‘profound risks to society and humanity.’”

The article goes on to explain the concerns of what had been AI ethics teams at both companies. Most of those people are no longer working with the developers, and the ethics teams appear to have been disbanded. I see the challenges here as a perfect illustration of how the desire to dominate the marketplace, to generate profits, and to get there fast sometimes leads to the abandonment of good risk management principles. Such fast growth, as Stein notes, is exponential risk and “an Rx for disaster.” It's clear that no single government agency is tasked with overseeing technology development, which means the guidelines and laws are simply not there. It is extremely unlikely that either company would agree to a six-month pause in development, and it’s also unlikely that Congress or the executive branch could act within that period.

The issue that may slip by is the intellectual poverty of much of the content turned into training datasets: typical “paper” assignments in college, routine business reports, and “research” papers that go no further than Internet lookups. There is a messier question of regulation that goes to the real and difficult need for more rigorous treatment of ethics in education.

If there were a Court of Last Resort on such questions, we could insist that ethical considerations be treated as building blocks in the development of technology and receive more attention on the front end. To their credit, both companies previously pulled back earlier prototype AI chatbots because of safety concerns. Microsoft and Google are not the only AI chatbot developers, but they each play critical roles today in how we ask for and receive information. I come from the side of the house that likes to understand the origins and integrity of information. We have enough groups interested in spreading disinformation without deploying powerful tools built on datasets that may generate further disinformation or erode our democratic society.

Now, more than ever, we need careful, patient, and meticulous thinking about what we are creating.