An Overview of Ethical Considerations

Throughout the last few months, I’ve been regularly posting my thoughts about the current and future states of technology. I’ve touched on quite a few themes in this time, but I’d like to take some time to re-examine and emphasize the most important ones. As I consider technology from a liberal arts perspective, I am concerned mostly with human-technology interactions and the ways in which they shape our culture. The specific computer science behind an interface concerns me less than the implications and consequences of that interface.

I believe the most common issue I’ve touched on is that of ethical concerns regarding technology. One could easily argue that this is one of the most important issues in the field of digital studies. More specifically, how can we balance the rights of consumers with the rights of corporations? This question has led me to a variety of places. In my first blog post, I wrote of the rights that Google and Facebook users have to transparency and knowledge of data centers. Should companies be forced to supply consumers with the information they request? How can companies make reasonable accommodations for such requests? In all my posts, I do my best to be a voice of reason and to resolve these conflicts in a non-partisan, moderate way. In every case, both the consumer and the corporation have certain rights and responsibilities that must be addressed.

For example, in my fourth blog post, I discussed Equivant’s COMPAS: the algorithm that predicts recidivism among charged defendants. The algorithm has been accused of being racist and unfair to defendants. In this case, defendants have a right to a fair trial and a fair assessment of flight risk and recidivism potential. If the allegations against COMPAS are true, then defendants are being robbed of that right. Equivant has consistently refused to provide the algorithm’s internal workings for analysis, citing intellectual property concerns. While I believe this concern is valid, the stakes in this case are higher than most, so Equivant should at least provide the algorithm to a group of unbiased academics for analysis.

Above: The majority of the world considers internet access a human right. How does this change our discussion of ethical considerations?

Over the past few weeks, my posts have varied in topic, but my overall mission has remained the same. In each case, some aspect of ethics is considered to some degree. I believe this is a testament to the overarching need to be cognizant of human needs and rights when inventing and using technology. This is especially important now that the UN has declared internet access a human right. When conflict arises in this new field, both sides must be heard, and a fair judgment must be reached. Sometimes consumers will not have as much freedom as they desire, and sometimes corporations will have more responsibilities than they can handle. The era of technology is a learning process for all of us, and we must treat it as such.


How to talk about the future of AI

Within our class discussions over the past two weeks, we’ve touched a great deal upon the pros and cons of using algorithms and artificial intelligence for a variety of purposes. As the module progressed, our conversations became darker and more concerned with the negative aspects of such technology. Dr. Sample even asked us to come up with the fastest way to destabilize the world using algorithms! While this sort of critical thinking and pessimism is necessary for thinking freely, I believe that many of the scenarios we discussed are unrealistic.

It seems unlikely to me that, in the near future, algorithms and AI will be the cause of the downfall of humanity. I would think it much more likely that a simple communication error would trigger a nuclear war (see the September 1983 Soviet nuclear false-alarm incident) than that an AI-enabled computer would cause an economic collapse. So far, AI seems well-behaved and easy to control. Like most things, it could be manipulated and turned into a weapon, but I do not see this as a cause for alarm. Skepticism is always necessary when evaluating new technology, but we should not rush into doomsday-scenario talk.

Suppose I’m wrong and AI has the potential to cause global catastrophe. What would that look like? We’ve talked about this a few times in class, and I’ve noticed that we are often tempted to use cinematic interpretations as a foundation for imagining this scenario. However, none of the directors of Black Mirror or Resident Evil has better ideas about what this would look like than we do. When we discuss ethical dilemmas and the pros and cons of a technology, our discussions should be down-to-earth. For example, I’m certain that the State Department and the Department of Defense gave no credence to the scenarios presented in Dr. Strangelove or Fail Safe when writing the US’s official policy on nuclear weapons.


Above: A screenshot of Dr. Strangelove (Peter Sellers).

Overall, I feel that AI has more potential for good than for evil. When we discuss both, we should be careful not to take fictionalized ideas of the best- and worst-case scenarios at face value, and should instead use our own intuition and experiences to fill in the gaps.

Using Algorithms to Sentence People?

When it comes to solving difficult problems, technology can be of great use. Everybody knows how useful technologies such as calculators and computers can be when you are trying to find a solution in math and science. For the most part, these kinds of calculations merely provide us with some basic information, deduced from a variety of parameters, that we can then use to continue an experiment or support a finding. While I am not an expert in computer science, I am well aware of the utility of algorithms in everyday life and in complex problems in the scientific and digital worlds. But how far does their utility extend?

In Algorithms of Oppression, I was struck by the mention of a sentencing software called COMPAS, developed by a company once known as Northpointe (misstated as the name of the software on page 27 of the reading) and now known as Equivant. The software takes the answers to a 137-item questionnaire covering most aspects of a defendant’s life and spits out a number that correlates with the defendant’s risk of re-offending. Judges can then choose to take this information into account during sentencing.

Above is a screenshot taken from Equivant’s website. After selecting “Judicial Officer” from a variety of roles listed on the site, I was directed to this page, which tries to convince me that the algorithm will make sentencing less of a burden.
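Because Equivant keeps the actual model secret, nobody outside the company knows how those 137 answers become a score. Below is a purely hypothetical sketch, written in Python, of how a questionnaire-based risk score might work in general; the items, weights, and scaling are all invented for illustration and are not COMPAS’s.

```python
# Purely hypothetical sketch of a questionnaire-based risk score.
# COMPAS's real items, weights, and scaling are proprietary and unknown;
# everything below is invented for illustration.

# Made-up weights for a handful of made-up questionnaire items.
HYPOTHETICAL_WEIGHTS = {
    "prior_arrests": 0.9,
    "age_at_first_offense": -0.4,
    "stable_employment": -0.7,
}

def risk_score(answers: dict[str, float]) -> int:
    """Collapse weighted answers into a single small-integer risk band."""
    raw = sum(HYPOTHETICAL_WEIGHTS.get(item, 0.0) * value
              for item, value in answers.items())
    # Clamp the raw sum into a 1-10 band: the kind of digestible number
    # a judge might be handed.
    return max(1, min(10, round(raw) + 5))

print(risk_score({"prior_arrests": 3, "age_at_first_offense": 1,
                  "stable_employment": 0}))  # -> 7 with these made-up weights
```

The point of the sketch is how much disappears in that final step: a defendant’s entire questionnaire collapses into one opaque number.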

One of the problems I have with this system is that defendants and their lawyers are not allowed to see the information that is put into the algorithm, nor its result. Judges themselves do not even have access to the algorithm (presumably for trade-secret purposes). Even more interesting, the algorithm has only a 65% success rate at correctly identifying repeat offenders. How can such a process be legal? Because judges are supposed to use it as only one of many factors in determining a sentence.

In Algorithms of Oppression, the author speaks a great deal about the racism and sexism unknowingly embedded in many search engine algorithms. This extends to COMPAS as well: a study analyzing criminal records from Broward County, Florida, showed that the algorithm is biased against African Americans. Such a revelation greatly complicates matters, as it is certainly immoral to use such an algorithm to sentence anyone. So how does the justice system go forward? I would suggest that the algorithm be improved until it can more accurately predict recidivism without racial bias. Sentencing, since it has historically involved a human making a decision, has always been tainted with prejudices. Now, with technology, we have an opportunity to effectively eliminate or minimize them, thus creating a fairer process for all.
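To make the bias finding concrete: the Broward County study compared not just overall accuracy but error rates broken down by group. Here is a toy audit of that kind in Python, run on entirely made-up records (the groups, predictions, and outcomes are invented):

```python
# Toy fairness audit on invented data: compare error rates across groups,
# not just overall accuracy.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- made up.
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", False, True),
]

stats = defaultdict(lambda: {"fp": 0, "neg": 0, "correct": 0, "n": 0})
for group, predicted, actual in records:
    s = stats[group]
    s["n"] += 1
    s["correct"] += predicted == actual
    if not actual:                # defendant did not reoffend...
        s["neg"] += 1
        s["fp"] += predicted      # ...but was labeled high risk anyway

for group, s in stats.items():
    print(f"group {group}: accuracy {s['correct'] / s['n']:.0%}, "
          f"false-positive rate {s['fp'] / s['neg']:.0%}")
```

With these invented records, both groups see the same 67% accuracy, yet group A’s false-positive rate is 50% while group B’s is 0%: two groups can be treated very differently by a model whose headline accuracy looks identical, which is the pattern the study reported for African American defendants.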

Nosediving into Subtle Societal Criticisms

One might say the entire point of Black Mirror is to make viewers see themselves as characters, causing them to critically examine the role technology plays in their lives. As a fan of the series, I can’t recall a single episode during which I felt completely comfortable watching. Nosedive is no exception. During the episode, I felt uneasy watching the awkward, clearly forced interactions between Lacie and random strangers. While not the series’ best episode, Nosedive paints a not-so-crazy picture of what can happen when an app that lets users rate other users gets out of hand. In the episode’s dystopian setting, it seems that no interaction between people is genuine; many of the characters do their best to appear pleasant in order to heighten their scores on the unnamed app. What’s more, users’ scores on the app are correlated with their social standing and can be used to determine things like rent rates, airplane seating, and even priority for life-saving organ transplants.

Much of the episode appears to be critical of such a system, where people can rate their interactions with other people. In fact, there is an application called “Peeple” where users can rate other users and even post reviews; the company even called its app the “Yelp for people.” Soon after the company announced the app in 2015, it was met with a slew of harsh criticism from people who saw the platform as a vehicle for cyberbullying. Defenders of the idea said that the app would create an incentive for people to improve their character and thus improve the overall quality of life of those around them. Peeple was eventually released in early 2016 with major modifications to its original concept. Nosedive was released later that year and appears to have drawn some inspiration from the app. I wonder what the founders of Peeple, Nicole McCullough and Julia Cordray, would say about the Black Mirror episode.

Additionally, I noticed some other, more subtle societal criticism. One scene that struck me was Lacie’s meeting with the realtor, during which she was informed that she would need a rating of 4.5/5 or above to qualify for a program that would reduce her rent by 20%. In a sense, the app’s rating system is like a modern-day credit score. People with low scores tend to pay more for the same housing product. However, the credit score is based on past credit transactions that could logically predict the future behavior of a lessee; the app accounts only for social status and others’ qualitative ratings. I believe the writers of the show were trying to paint the credit score system in a critical light. It is true that a single bad financial decision can haunt a person for decades, often without their knowledge. But there is some benefit to the system, as it prevents landlords and banks from being defrauded. Contrary to popular belief, it is in the bank’s best interest for you not to default!

Another interesting moment for me was the conversation Lacie had with the truck driver, whose husband was denied a life-saving experimental procedure because he had only a 4.3 rating (the man whose life was saved had a 4.4). This brings to mind the transplant waiting list, where sick people are ranked using several metrics to determine their eligibility for an immediate organ transplant. If someone needs a liver transplant, that person’s age, disease progression, and alcohol/smoking history can all be taken into account to decide whether they will receive the life-saving procedure. Should this be the case? I would hope that most reasonable people can agree that social standing should not be taken into account (nor should religion, employment status, race, or anything similar). This particular issue boils down to a supply-and-demand problem, but the writers of Nosedive seem to have some subtle criticisms of the process.

Should we be paid to see ads?

I’d like to talk about a comment that someone made in class on Friday following our discussion about consumer data collection. To be completely honest, I have no idea who made the comment; I just know that it was made towards the end of class and that I didn’t have time to respond.

In the midst of a discussion about consumers’ data being collected and sold to marketing companies, Dr. Sample reminded us that for each click an advertisement receives, the website hosting the advertisement makes a small amount of money. Someone mentioned that since advertising revenues are determined in part by the number of clicks an ad receives, it should follow that consumers themselves should get paid a sum to have their data collected and used for such purposes. I disagree with this notion and find it a bit preposterous.

Consumers are provided with the internet for free (besides hardware and connection costs). All the knowledge that humanity has is at our fingertips, just seconds away via Google. In addition, an entirely new way of communicating was invented by companies such as Facebook and Twitter. Amazingly, all of these services remain completely free to use years after their invention. If Facebook wanted, it could charge members a monthly fee. The same goes for Google and other companies. These businesses make a great deal of money from advertising, so demanding they also pay consumers for viewing ads strikes me as a cop-out. Consumers already get free access to these sites; what more do we want? If viewing a few ads on every webpage is the price to pay for unlimited, instant knowledge of every topic imaginable (in addition to worldwide communication), then that is a STEAL. There are some companies that pay you to watch ads or take surveys. The screenshot below is from Wordlinx, a company that pays you to share links with friends and watch ads. They are essentially paying you for your data, so there must be some economic justification for doing so.
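Some rough arithmetic also shows why “pay users for their data” schemes tend to amount to pocket change. The figures below are assumptions for the sake of argument, not any real company’s numbers:

```python
# Back-of-envelope sketch with assumed figures (not real company data):
# what a "pay users for their data" scheme might actually pay out.
annual_ad_revenue = 100e9     # hypothetical platform ad revenue, USD/year
active_users = 2e9            # hypothetical user count
consumer_share = 0.10         # suppose 10% of revenue were paid back out

payout_per_user = annual_ad_revenue * consumer_share / active_users
print(f"~${payout_per_user:.2f} per user per year")  # -> ~$5.00
```

Set against unlimited search and worldwide communication, a few dollars a year hardly changes the bargain.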

This discussion reminds me a bit of the reading in FEED. Violet mentions that her family was too poor to give her a FEED when she was born, so she instead got one when she was seven. From my perspective, it seems as though a plurality (if not a majority) of the FEED is pure advertising. Again, this raises the question of whether the internet should be free. Maybe Violet and Titus should get paid for listening to upcar ads. Given that the price of getting a FEED is so steep, why are there ads? Is there a premium version of the FEED without ads?

I personally like having ads tailored to my interests; I would rather see ads for phone cases, shoes, and swim gear than for community college or auto parts. That being said, a father once discovered his daughter was pregnant through the targeted baby-product coupons Target sent her. Where do we draw the line?

The Truth about Transparency and What to do Next

With technology companies rising to become some of the most powerful institutions on Earth, the call for transparency has grown louder and louder. Transparency in labor practices seems to be a key component of activists’ demands of all companies. In order to appear more worker-friendly and environmentally conscious, technology companies have elected to advertise their physical presence. To many, the internet feels as if it lacks any physical presence at all. When I think of a Google employee, I picture a nerdy programmer sitting on an exercise ball in Palo Alto, CA. I certainly don’t think of the workers tasked with keeping the dozens of North Carolina data centers up and running. Yet without these very workers, the internet wouldn’t exist today in the capacity that it does.

Even a few years ago, if you were to ask someone where the internet was, you might get some puzzled looks. That has changed now that companies have decided to publicize the physicality of their internet servers. Colorful pictures showcasing what seems to be an ideal work environment are great for advertising a technology company’s presence. Combined with environmental statistics claiming that a given data center is “carbon-neutral,” these photos make a powerful argument in favor of the company.

But let’s be transparent about transparency. Nothing requires these companies to be forthright about their data centers. They can be as unspecific about their energy consumption as they want. And quite frankly, we as consumers don’t have any right to know their practices. The simple fact that there are very few alternatives to Google, Apple, and Facebook means that they can do what they want with their employees, their data centers, and our privacy with little fear of repercussions. Google, as Holt and Vonderau mention, is notoriously secretive about its infrastructure. Presumably, it has trade secrets it wants to keep to itself. This raises a whole host of intellectual property issues that aren’t quite relevant to this discussion; suffice it to say that secrecy means everything to companies whose entire premise is built on innovation.

So…what’s left for these companies to innovate with regards to their data centers? They may not have to be transparent, but they do have to keep coming up with new ideas to maintain their stock prices. Let’s start with energy. The fact of the matter is that these data centers place extreme demands on the energy grid. So what can be done to reduce their carbon footprint? I don’t want to discuss renewable energy, since that is another discussion in and of itself. Instead, I want to show a graph from Gartner, a research and advisory firm.

This graph is known as the 2017 Gartner Hype Cycle. It places a variety of developing technologies at specific points on a curve based on their expectations and current stage of development. I want to draw attention to two specific points. The left-most technology is known as SmartDust; essentially, this technology involves using hundreds of microcomputers, some no larger than a grain of sand, to detect variations in light, temperature, and location. The microcomputers can communicate with one another, so a network can be created in a space the size of your fingertip. This technology is extremely efficient and could be utilized in the future by companies like Google if development continues. The second point I’d like to highlight is Quantum Computing, which essentially uses quantum-mechanical phenomena to perform calculations. In principle, this type of computation could pack far more computing power into far smaller devices than silicon transistors allow, since the basic unit of computation could shrink toward the scale of individual particles. Such developments would let data centers do more with less hardware and would presumably reduce tech companies’ carbon footprints. Unfortunately, many of these developments are several decades away, but it seems to me that the answers to tech companies’ woes can be found in these ideas.
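To make the SmartDust idea a bit more concrete, here is a toy Python sketch of motes relaying a sensor reading hop by hop. The node layout, radio behavior, and readings are all invented for illustration; real SmartDust protocols are far more sophisticated.

```python
# Toy sketch of the SmartDust idea: tiny sensor nodes ("motes") that relay
# readings through whichever neighbors are in radio range. All invented.
import random

class Mote:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []    # motes within radio range

    def sense(self):
        return {"node": self.node_id,
                "temp_c": round(random.uniform(18, 25), 1)}

def flood(mote, reading, seen=None):
    """Naive flooding: each mote forwards the reading to each neighbor once."""
    seen = seen if seen is not None else set()
    if mote.node_id in seen:
        return
    seen.add(mote.node_id)
    print(f"mote {mote.node_id} relays {reading}")
    for n in mote.neighbors:
        flood(n, reading, seen)

a, b, c = Mote("a"), Mote("b"), Mote("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
flood(a, a.sense())   # the reading hops a -> b -> c
```

The appeal for data centers is exactly this kind of dense, low-power sensing: fine-grained temperature and light readings could let cooling systems react locally instead of running flat out.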