Two Possible Paths for the Future of Artificial Intelligence

AI making decisions for us vs. helping us make more informed decisions


Artificial Intelligence has come a long way since the concept was born in the 1950s. Every day, companies work to build better algorithms and more powerful machines that can handle more data. Some of the AI we see today is impressive, but it is all still Artificial Narrow Intelligence (ANI): models that are good at predicting only the specific things we train them for. In the future, it is very possible (many say inevitable) that we will reach Artificial General Intelligence (AGI), where machines are as well-rounded at thinking as humans. Soon after AGI, those machines would probably be able to create Artificial Super Intelligence (ASI): machines that are smarter than humans.

ANI has already helped humankind in many ways. In medicine it has aided drug discovery, diagnosis, and the choice of treatment options. It has also improved our daily lives with things like better weather forecasts and assistants like Siri that answer our questions. AGI would improve the predictions machines can make, combining human-like reasoning with the ability to process far more data than we can. I believe ASI is where things start to become unsettling. ASI refers to machines that are smarter than humans, and they would most likely be developed by AGI itself. ASI could turn prediction into more of a "black box" situation: we pose a question and it gives us an answer, but nobody knows how it arrived at that answer.

There is a lot of controversy over whether AI should be stopped or limited at some point. I believe that, given human curiosity and our drive for innovation, we will continue to improve it indefinitely. Instead of limiting its possibilities, I think there are a couple of rules we can follow while using it. The first rule is to always use AI for predictions, never for final decisions. This does not mean we cannot let it suggest solutions to problems, only that we treat those suggestions as candidates. For example, in drug research, a scientist would still do their own research and testing on a candidate drug discovered by AI; using AI this way frees the researcher to spend more time testing drugs instead of also having to find the candidates to test. The second rule is to never let AI become a "black box." We should always have an idea of how and why a machine made a prediction. This is why I believe ASI can be a little scary: a prediction made by ASI could be accepted "because it's smarter than us," rather than because it is a tool humans set up and understand. We would never know whether a prediction is missing a key piece of reasoning needed for a well-informed decision.

[Diagram: enterprise decision flow, from data collection through cleansing and analysis to suggested decisions and a final decision (Creative Commons licensed)]

The diagram above is a good example of using AI correctly. It shows the flow of an enterprise making business decisions, though the flow would be similar for any type of research: data is collected, cleansed, and analyzed, and then a decision is made. The important part is the "suggested decisions" node. Possible solutions should always be weighed by professionals in the field being researched, never just automatically accepted.
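That flow can be sketched as a minimal human-in-the-loop pipeline. This is just an illustration of the idea, not a real system: the function names, the toy data, and the rule-based "model" are all hypothetical, and the point is that the suggestion step and the decision step are kept separate.

```python
# Sketch of the flow: collect -> cleanse -> analyze -> suggest -> human decision.
# All names and data here are hypothetical, for illustration only.

def collect_data():
    # Stand-in for real data collection.
    return [
        {"region": "north", "sales": 120},
        {"region": "south", "sales": None},  # dirty record
        {"region": "east", "sales": 95},
    ]

def cleanse(records):
    # Drop records with missing values.
    return [r for r in records if r["sales"] is not None]

def analyze(records):
    # Toy "model": suggest boosting the weakest region, and say why.
    weakest = min(records, key=lambda r: r["sales"])
    return {
        "suggestion": f"increase marketing in {weakest['region']}",
        "reason": f"lowest sales ({weakest['sales']})",
    }

def human_decision(suggestion, approved_by=None):
    # The key node: a professional reviews the suggestion and its stated
    # reasoning before anything is acted on; nothing is auto-accepted.
    if approved_by is None:
        return {"status": "pending review", **suggestion}
    return {"status": f"approved by {approved_by}", **suggestion}

records = cleanse(collect_data())
suggestion = analyze(records)
print(human_decision(suggestion))                      # stays pending
print(human_decision(suggestion, approved_by="analyst"))
```

Note that `analyze` returns a reason along with its suggestion, which is the "no black box" rule in miniature: the reviewer sees not just what the machine suggests but why.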

There are endless possibilities for AI to improve human lives. It may even be our best hope for solving some of our toughest problems in areas like disease, climate change, and energy. With those possibilities come many directions AI could take in the future. AI should always be used as a prediction machine, not a decision machine, and we should always have a grasp of how it arrives at its predictions. I believe that following these two rules will ensure we get the most benefit from AI without it negatively impacting us.