I am a Muslim Black Woman who works in the realm of Political Sociology, and as expected, the topics of discussion in my circles often revolve around power, social stratification, race, culture, and politics: the usual culprits of preoccupation for someone who moves in the “Arts” world. But over this past summer, I have spent more time than usual discussing Artificial Intelligence (AI) and the potential consequences of advanced technology.
Recently, I was sent an article published by the Harvard Business Review (HBR) entitled “Stop Worrying about Whether Machines Are Intelligent,” an article that jolted me to put thought to paper (or, more accurately, to put finger to keyboard). Reflecting on the article, I wouldn’t exactly say I was worried about AI per se; it sits very low on my laundry list of worries. Nonetheless, AI is not only an interesting conversation starter but also, increasingly, a development carrying very real implications for many across the globe.
To me, the gist of the HBR article was that it is far too soon to worry about something like an advanced AI “takeover.” We are reassured that, at the end of the day, AI is simply code, so there is no ghost in the machine to fear. Code means there is always a human agent behind the machine. Bottom line: at this point in time, and for the foreseeable future, there is nothing resembling a sci-fi machine revolution to worry about. In a way, there is no such thing as Artificial Intelligence at all, since every advance is ultimately a byproduct of human intelligence.
The few sentences above are, of course, a very rough breakdown of the HBR piece, or more accurately, the breakdown I took from it (see for yourself). Following this piece, I did a little more reading on what AI is and reviewed some of the major ethical and social concerns. It quickly became apparent that the most common point put forth to calm the hysteria of ‘lay folks’ like myself was almost exactly what the author of the HBR piece argued: AI is code, there is no ghost inside the machine, i.e. there is nothing to fear. So, for now, the idea of a machine-led apocalypse is something we can relegate to movies and science fiction.
“AI is simply Code”, “AI is simply Code”, “AI is simply Code”….
The emphasis of that statement got me thinking: what was it about advanced AI that I found so unsettling? Was it fear of a blockbuster sci-fi machine takeover? Was it that the development of advanced AI would make us irrelevant? Loss of jobs? After a bit of journaling (yes, I said journaling), I realized that what I am actually wary of are the PEOPLE behind the machine, more than the machine itself.
The argument of “don’t worry, people are behind the tech” does nothing for me; it certainly does not produce the calm it seems intended to. It is not cameras and microphones that have made any sense of privacy nonexistent, but rather the tech-based surveillance systems being constructed by human agents. Drones are not themselves the problem; the problem is the forces that have decided that one way to effectively utilize the tech is to strap them with arms and orient them as a means of managing uncertainty and threat by flattening one village at a time.
History has repeatedly shown that greater power in the hands of already powerful people (and systems) is a recipe for not very nice things. Maybe I am not imaginative enough to worry about a machine takeover, but I am critical enough of power to worry about implementation. A quote I read recently comes to mind: “While every improvement is necessarily a change, not every change is an improvement” (Yudkowsky 2011). That sentiment is in many ways lost in our fast-changing, progress-for-the-sake-of-progress world. The attitude these days seems to be “all hail progress,” whether or not we are progressing off a cliff.
If we simply think of robots and Artificial Intelligence as extensions of their creators, then yes, worrying about a machine takeover becomes something of a moot point. But AI is STILL a charged issue, not in an apocalyptic sense but in a good old political and social sense. The initial questions that come to mind from a sociopolitical perspective are these: Whose smarter and stronger extensions will robots become? When we ask whether AI stands to benefit or harm us, who is the “us” in this arrangement? The question of harm is fundamentally tied to the question of whose hand of influence the machines are extending. Whose vision of the world will it sharpen? Who in our global stratification will it stand to enrich? How will it come to restructure the already highly skewed order of power we currently occupy?
From the little I know, Artificial Intelligence is very much an elite American preoccupation, and as a Black Muslim Woman a world with smarter, stronger and more extended elite white men is definitely a world that leaves me unsettled, to say the least.
—
Written by Minifre Harak
There wasn’t any need for the article-writer to stress her being this or that. One should think about others on merit, not on skin color or sex. Regarding her wording, “AI is simply code,” it seems she is lulling herself into a kind of illusion. One writes code, or programs things, so that the machine or AI performs better than ever before. The aim of the code is to make the AI better, so that it can perform tasks not yet done by others. Writing good code or a good program is just one step; there are many unfinished tasks ahead and immense possibilities too.
I also have to say that present AI programming should always be treated as a precursor to something better that is on the way. We have only just entered the realm of AI in very primitive ways; the possibilities are endless.
I love this. Way too many people never recognize this.
The Japanese are big on robots and AI. One of the reasons is that they are trying to develop a human-like robot as a companion for their aging population.
So you don’t want your smartphone to predict what you are trying to text? Can’t you turn that feature off? You don’t want a car that tells you when it is time to change the oil, or tries to anticipate your best route? Are you like the guy who said his TiVo thought he was gay?