Is the A in artificial intelligence really artificial?



Artificial intelligence is a field of study that explores how computer systems can be automated to perform tasks that would otherwise require human intelligence. Its potential is truly incredible, and as AI develops, the possibilities it might unlock only increase. But how accurate is it to call AI “artificial”? Artificial is defined as something “made or produced by human beings rather than occurring naturally, especially as a copy of something natural.” So, can the so-called intelligence of AI ever occur by itself, or must it always be the result of human input?


The part of AI that will always be artificial


There is no way around AI being a human invention. Founded as an academic discipline in 1956, AI now has numerous applications in the real world, and is often used by businesses to achieve their goals. From the start, it requires programming to know what to do. AI rests on a foundation of coding that lays the groundwork for its ability to learn. This is the most obvious reason that it's defined as artificial: its capability to learn is restricted by how it was designed by humans.


AI as it exists must also be directed by humans in order to be effective. Without being targeted properly, AI is aimless and won't achieve the goals it otherwise has the potential to reach. For a business, it's important to know what you want your AI to achieve before you begin using it. This means directing it with the right data towards the right goal. Otherwise it could become bloated and ineffective.


Automation and AI


The main benefit of AI as it currently exists, however, is in its ability to be automated. This is where the idea that it is intelligent comes from; it has the ability to learn from its mistakes and become better. Therefore, the question of how much it can be considered artificial depends on its capacity to improve itself.
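To make the idea of "learning from its mistakes" concrete, here is a toy sketch in Python. It is not any particular AI system, just an illustration of the principle: a single parameter is nudged a little each time its prediction misses, so the error shrinks over repeated passes. The data and learning rate are made up for the example.

```python
# Toy sketch of "learning from mistakes": one parameter is adjusted a
# little each time its prediction misses, shrinking the error over time.

def learn(examples, steps=100, lr=0.1):
    weight = 0.0  # initial guess
    for _ in range(steps):
        for x, target in examples:
            prediction = weight * x
            error = prediction - target   # the "mistake"
            weight -= lr * error * x      # nudge the weight to reduce it
    return weight

# The hidden relationship here is y = 2x; the loop recovers it from examples.
examples = [(1, 2), (2, 4), (3, 6)]
print(round(learn(examples), 2))  # converges toward 2.0
```

Nothing here is mysterious: the "intelligence" is just repeated correction. Yet a human still chose the data, the update rule, and the goal, which is exactly the tension this article is about.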


If you’re familiar with the business models of some of the world’s largest technology companies, you may have heard the word “algorithm” before. An algorithm is a series of step-by-step operations that can solve a defined problem in a set number of steps. Algorithms are used to automate processes that can be oriented towards the interests of specific users. Importantly, they can be used to automate a learning process, one that refines itself and better serves the algorithm's goals. This allows them to become more accurate and useful than they were when first designed.
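A classic example of an algorithm in this sense, given as an illustrative sketch rather than anything tied to a specific company's systems, is binary search: a fixed series of steps that solves a defined problem (finding a value in a sorted list) in a predictable number of steps, roughly log2 of the list's length.

```python
# Binary search: repeatedly halve the search range until the target is
# found or the range is empty. Each step follows a fixed, defined rule.

def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid        # found: return its position
        elif items[mid] < target:
            low = mid + 1     # discard the lower half
        else:
            high = mid - 1    # discard the upper half
    return -1                 # not present in the list

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
```

An algorithm like this is entirely hand-designed and never changes; the learning algorithms described above differ in that their behaviour shifts with the data they see.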


Some might argue that not all machine learning is AI, and vice versa, but either way it only scratches the surface of the extent to which AI can “learn”. As you go deeper into what machine learning is capable of, you’ll find information about neural networks and deep learning, which take the field to greater heights of complexity. It's an ever-evolving area of study, and who knows where it might take us next?


What the future might hold


When it comes down to it, the question of whether AI can be considered artificial has huge philosophical implications that are difficult to fully explore. Ultimately, it is a matter of how far its ability to learn can ever be detached from its creation by human beings. Can AI ever be responsible for itself, and how far away is that technology today? The applications of AI and its ability to learn are already incredible, so with the rapid evolution of technology, how far can it go? Only time will tell.

