Artificial Intelligence: Building a Community for a Smarter, Safer AI
“We find ourselves in a thicket of strategic complexity, surrounded by a dense mist of uncertainty.” — Nick Bostrom
After two long winters that saw minimal innovation in this field of technology, Artificial Intelligence has made its way back into public discourse. This time around, it comes to us in the form of a program that assists you as you complete your daily work, practically extending your brain into your device. It remembers things for you and does busy work that previously would’ve taken you orders of magnitude longer to complete.
As we learn more about this technology’s seemingly limitless capabilities, we naturally can’t help but speculate about just how powerful it might grow to be in the near future. We are at a new frontier and find ourselves in the dense mist of uncertainty that Nick Bostrom vividly depicted in his book, Superintelligence. Such a degree of uncertainty raises the question: How can we keep up with a rapidly evolving AI technology in order to build the safest and most inclusive tool possible?
To answer this question, we can look to our nation’s past, as this isn’t the first time our way of life has been disrupted by groundbreaking technology. As Mark Twain is often credited with saying, “History never repeats itself, but it does often rhyme.”
Photo Credit: National Park Service
Our nation’s history of inventing is best known for its creative destruction, rapid iteration, and a willingness to embrace risks and make sacrifices in order to speed up mass adoption. When trains derailed as we connected the continent with railroads, and when steamboats exploded while traveling up the Mississippi River, we learned and adjusted our inventions to make them safer and more scalable. Our ability to innovate paved the way for today’s workhorses in transportation and logistics. Now, as billions of dollars are poured into Artificial Intelligence, the potential disasters that lie ahead could carry consequences far more dire than anything we’ve seen in the past.
AI’s Potential Ethical Concerns and the Risk of Losing Control
AI’s potential catastrophes primarily lie under two umbrellas: ethics and optimization power.
Ethical issues have already arisen in the form of consumer exploitation and bias in training data. Corporations are training models on consumer data collected without consent, and hidden biases in that data surface as unintended, inaccurate, and potentially harmful responses. This is just the tip of the iceberg in regard to the ethical dilemmas AI presents.
The transition of optimization-power superiority from man to machine is a potentially dangerous development whose effects could be more dramatic than we can imagine. Experts in the field are cautious about the risk of such a rapid increase in optimization power. Notably, Sam Altman, CEO of OpenAI and the current face of the Artificial Intelligence movement, warned of a model’s potential capability for self-replication and self-optimization when he testified before the Senate earlier this year, calling for a clear AI regulatory framework.
Unlike steamboat explosions or train derailments, AI poses the threat of giving rise to a sentient being that is thousands of times smarter than we are, and whose intelligence grows exponentially faster than our civilization’s.
Photo Credit: Superintelligence
So what does this mean for small companies with good intentions using this technology to build out products and services in their respective industries? Would they need to hire an entire ethics review board, plus an intern with a hand hovering over a kill switch in case one of their servers evolves into a self-aware being, grows a pair of legs, and runs off into the wild? While that scenario may be a stretch, there certainly are steps we can take to ensure that this technology is built and implemented safely, and with a diverse set of data from a variety of sources. One solution is to build a community with a range of opinions, perspectives, and training data to ensure that, unlike with previous innovations, no one is left behind this time.
The Importance of Community as AI Continues to Evolve
Abstract recognizes the potential risks and ethical dilemmas presented by AI. To safeguard against them, Abstract has built a knowledge-sharing community dedicated to creating the safest and smartest tool for working with government data. This community formed after multiple conversations with industry specialists and companies practicing creative destruction, all sharing the common goal of building a knowledge base of safe and efficient methods for working with this new technology.
We understand that working with everyday hobbyist developers, companies in adjacent industries such as Trellis, and industry veterans in the government affairs scene will allow safer and smarter AI-powered services to be created. The goal is to leverage the insights and perspectives of everyone involved, as well as the nuances and tricks of the trade from those who have been in the industry for decades, in order to empower the rookie who is just getting a foot in the door. It takes a diverse community to serve a diverse community.
AI services may begin as a complement to existing work, but as AI communities develop ever-stronger models, these services will begin to reshape expectations for SaaS. Over time, the technology will evolve from an online tool that helps with minor tasks into a virtual employee that completes all of your office work for you. For Abstract, these increased capabilities will free users to roam the halls of their legislatures and build connections, represent under-served trade workers and citizens, and build the ideal America that set them on their career paths.
The potential of AI has yet to be fully understood or actualized, but as long as we continue to learn from the past and safeguard against potential risks, we can build tools that revolutionize our lives and careers.