What Price Technology?

Here’s a thought experiment: imagine a crazy person on the street ranting that the world is going to end in seven days. The only people who can even hear what he’s saying are the ones in close proximity as they pass by, and truthfully they probably don’t give him much thought. Now imagine that an accommodating passer-by gave that same person a megaphone - admittedly a simple and antiquated example of modern technology, but bear with me. Now that person’s dire forecast of the future is much louder and reaches a lot further. Furthermore, people are more inclined to take someone with a megaphone seriously, as megaphones typically connote authority of one kind or another. Maybe a few people even begin to take him seriously, or at least get momentarily anxious that he could be right. I think most reasonable people would agree that in such an instance, that particular megaphone is not being utilized in an optimal way; but is the person who designed the megaphone responsible for the way it’s being exploited?

The responsibility that the designers, manufacturers and distributors of technology bear toward the consumer is being debated more frequently and contentiously than ever before. The Weekly, a daily (just kidding) television program produced by the NYT, recently aired an episode focused on the way extreme right-wing political figures in Brazil have employed YouTube to spread their divisive messages of bigotry and hate. Many of those YouTube “celebrities” have even ascended to political office; the power of that particular megaphone is that forceful and far-reaching. YouTube describes itself as a community message board of sorts: it just provides the cork board and the tacks, and what people post is entirely up to them. At issue is the way the company’s proprietary algorithm groups videos together and disseminates them to its users. The algorithm’s critics claim it creates a kind of echo chamber in which extremism becomes normalized through sheer saturation, steering users from one controversial video or channel to the next. In its own defense, YouTube claims that the algorithm is designed to direct users toward content it supposes they’ll enjoy based on content they’ve already viewed, and that the company will strive to adjust it to avoid the repetitive patterns journalists and social activists have brought to light. Could the algorithm’s designers possibly have foreseen it functioning that way? And what responsibility do they bear for it now?
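
The real recommender is proprietary and vastly more complex, of course, but the critics’ complaint is easy to see in miniature. Here’s a toy sketch in Python - made-up video titles, a made-up one-dimensional “intensity” score, and a recommender that simply serves up the unseen videos closest to your viewing history - showing how “more of what you already watch” can ratchet a viewer toward the fringe one click at a time:

```python
# A deliberately crude recommender, NOT YouTube's actual system (which is
# proprietary). Every video gets a made-up 0-to-1 "intensity" score, and the
# recommender suggests the unseen videos closest to the average intensity of
# the user's watch history.

videos = {
    "calm news recap": 0.1,
    "heated panel debate": 0.4,
    "one-sided rant": 0.6,
    "conspiracy deep-dive": 0.8,
    "extremist channel": 0.95,
}

def recommend(history, catalog, k=2):
    """Return the k unseen videos nearest the user's average watched intensity."""
    avg = sum(catalog[v] for v in history) / len(history)
    unseen = [v for v in catalog if v not in history]
    return sorted(unseen, key=lambda v: abs(catalog[v] - avg))[:k]

# Simulate a viewer who starts with tame content but always clicks the
# spicier of the two recommendations offered.
history = ["calm news recap"]
for _ in range(3):
    recs = recommend(history, videos)
    history.append(max(recs, key=lambda v: videos[v]))

print(" -> ".join(history))
# calm news recap -> one-sided rant -> conspiracy deep-dive -> extremist channel
```

Notice that each individual recommendation is perfectly reasonable - it really is “similar to what you watched” - which is exactly what makes the cumulative drift so hard to pin on any single design decision.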

YouTube exists on the ethereal plane of the worldwide web; let’s take a look at more corporeal technology, the kind that can do actual bodily harm when employed in a careless manner. Tesla, the most famous and successful of all electric/automated vehicle brands, has been in the news many times in the last handful of years in a way the company surely can’t feel good about. Tesla’s Model S, Model X and Model 3 vehicles all come equipped with a self-driving mode, or “Autopilot.” Now if you’re anything like me, you find the idea of automobiles driving themselves around on the highway terrifying (unless, of course, said vehicle is voiced by Mr. Feeny from Boy Meets World), but that perception doesn’t match up with reality; studies (with admittedly small sample sizes, as the number of self-driving cars on the road is still relatively small) show that auto-piloted vehicles are less likely to get into accidents or collisions than their human-driven counterparts. Furthermore, Tesla’s Autopilot still requires the driver to keep at least one hand on the wheel in order to function, a feature that would seemingly protect the vehicle and driver in the case of an Autopilot malfunction. Tesla’s woes aren’t that easily brushed aside, however; do a quick YouTube search (I know it seems like I’m coming down pretty hard on YouTube, but this particular connection is mostly coincidental - for the record, I enjoy spending hours at a time YouTube surfing as much as the next guy) for “tesla autopilot hack” and you’ll find dozens if not hundreds of results in which enterprising scofflaws delineate how to fool the system into thinking the driver’s hand is on the wheel. There are even products you can purchase designed to fool it, like the Autopilot Buddy.
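
How do those hacks work? Tesla’s system is widely reported to sense hands indirectly, through the small torque a resting hand puts on the steering wheel, and defeat devices like the Autopilot Buddy are reportedly just weights clipped to the rim. The firmware itself isn’t public, so the sketch below is an assumption-laden toy rather than Tesla’s actual logic, but it shows why a naive torque-threshold check can’t tell a counterweight from a hand:

```python
import random

# A toy driver-presence check, NOT Tesla's actual firmware (which isn't
# public). Assumption: the car samples steering-wheel torque and counts any
# reading above a small threshold as "hands on wheel."

TORQUE_THRESHOLD = 0.05  # hypothetical cutoff, in newton-meters

def hands_on_wheel(readings):
    """Naive check: every torque sample must exceed the threshold."""
    return all(abs(r) >= TORQUE_THRESHOLD for r in readings)

def human_hand():
    # A real resting hand: small torque that wobbles as the driver
    # makes unconscious micro-corrections.
    return random.gauss(0.12, 0.02)

def clip_on_weight():
    # A defeat device: gravity pulling on a fixed mass, so the torque
    # never changes.
    return 0.12

for label, sensor in [("human hand", human_hand), ("clip-on weight", clip_on_weight)]:
    samples = [sensor() for _ in range(5)]
    print(f"{label}: passes check = {hands_on_wheel(samples)}")

# Both print True. The two only differ in how the readings *vary* over time,
# which is why catching defeat devices pushes manufacturers toward richer
# signals (torque variance, camera-based driver monitoring, and so on).
```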

Tesla is very clear about the ways in which its Autopilot system should and should not be used; it’s intended to assist human drivers, not take over for them altogether. And in Tesla’s case, unlike that of the megaphone, the users in question are actively trying to subvert the intended purpose of the Autopilot feature. A megaphone is supposed to make people’s voices louder, after all; but Teslas aren’t designed (at least according to the company) to drive themselves. When users (drivers, in this case) put themselves in peril by using technology in a way it was never designed or intended for, how responsible can the designers of that technology really be?

As the growth of artificial intelligence and machine learning continues to accelerate, and a future like the one luminaries such as Stanley Kubrick and Isaac Asimov foresaw gets closer and closer to reality, the issue of technological responsibility will move even further into the foreground. Questions like the ones asked earlier of the coders who wrote YouTube’s algorithm and the designers of Tesla’s Autopilot system may come to define the ongoing technological revolution in the years to come, and will almost certainly become more and more difficult to answer. Though we’ll probably always think of technology in terms of metal and wiring, it’s truthfully more like a living, breathing organism that can grow and evolve in strange and unexpected ways. We’ll look at the places where technology meets moral and ethical responsibility more in the future, and maybe even try to answer some of those seemingly unanswerable questions along the way.