GPT's Essay on the Political Consequences of AI Bots

Me: Write up an academic-style essay on some of the consequences of emerging AI bot technology, focusing on political consequences.

GPT:

Although bot-generated content has been recognized since the mid-1990s,[1] the digital bots that appeared during the contemporary election cycle have intensified and extended the bot phenomenon. The 2016 presidential election cycle was marked by the appearance of these bots at an unprecedented pace, one more commonly associated with the spread of viruses.

For example, on November 30, 2015, researchers at the security firm Symantec observed a large number of web addresses and IP addresses belonging to Google Ads, Bing Ads and Facebook generating election-related content in a coordinated manner. According to the researchers, the content was in English and contained at least one campaign or general-election term referring to Barack Obama, Hillary Clinton, Bernie Sanders or Donald Trump.

The bot content consisted of the following:

An unusually high number of election-related ads were run on election day in battleground states, with over three times as many Obama/Biden ads as Trump/Pence ads, according to a report by the social media analytics firm Brandwatch.

The majority of the ads were for Facebook’s ‘get out the vote’ efforts. Other social media advertising efforts in the week prior to election day were also observed.[2]

The majority of the ads were seen in key battleground states, including Michigan, Florida, Ohio, North Carolina, New Hampshire, Pennsylvania, Wisconsin, Colorado and Nevada.[2]

Content was also observed on Reddit, with threads titled: ‘how can I vote for Donald Trump?’ and ‘how can I vote for Hillary?’[3]

Ads promoting the #GOPCaucus hashtag were seen on Twitter, with one promoted on November 25 by the Trump campaign Twitter account.[4]

The election content observed by Symantec could have been generated by a variety of people and tools, including Google, Facebook, Twitter, Reddit, Tumblr and YouTube, among others. Symantec’s Mark Maunder stated that content on social media platforms could also be generated via bots programmed by third parties.[2]

One of the issues with this type of activity is the lack of transparency and the difficulty of identifying the source of the messages. Symantec also noted that the election content was spread across a range of platforms, so no single social media platform could be targeted in isolation.

The firm noted that while some of the ads could have been bought by individuals, “the sheer volume of ad impressions may have come from multiple sources, which raises the possibility that their distribution may have been part of a wider campaign.”[2]

1.1 Introduction

In 1997, computer scientist Danny Cowan and programmer Bill Ottman began developing a bot that could automatically respond to messages posted on the Usenet newsgroup alt.fan.aliens. The bot, known as “Ariel”,[2] became popular among other users and has been cited as an early example of the application of social media, and of “the Internet phenomenon that made news headlines this past year: social networks”.[3]

Bot-generated content is one of the most common sources of information on the internet. By the end of the 2016 election, bot-generated content amounted to over 60 billion tweets,[2] 25 billion Facebook posts,[3] and millions of YouTube videos,[4] according to the Pew Research Center, all of which were generated without the direct participation of users. As a consequence, bot-generated content has come to be seen as a phenomenon without boundaries, ranging from political activism[5] to criminal activity[6], and as a new form of communication that blurs the traditional line between user and publisher.[7] Despite this rapid growth and the increasing significance of bot-generated content in contemporary society, the bots themselves remain little explored. This article attempts to understand the political implications and consequences of bots by reviewing the bot phenomenon.

1.2 Bot as a political entity

It is no secret that bots can spread information. But how bots are used to spread misinformation, and how that misinformation propagates, will require a closer look.

In the 2000s, bots emerged as part of an increasingly prevalent trend towards the use of “script-based software or programs to automatically generate text for websites or web pages.”[8] They had become “one of the most important tools of the [web] 2.0 revolution.”[9] As their popularity has grown, social bots have become an integral part of the political realm. The 2016 election cycle has been marked by the appearance of bots commonly referred to as “social bots.”[10] According to a Facebook study, social bots accounted for more than 10% of all news feed messages,[11] with an overwhelming share of that volume consisting of bot-generated political content. This content was created to “spread partisan messages, generate political news stories, and manipulate public opinion.”[12] In this section, we focus on the political dimension of the bot phenomenon as it has been highlighted in contemporary discussions of political bots in 2016.
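
To make “script-based software … to automatically generate text” concrete, here is a minimal Python sketch of a template-filling bot. Every template, name and value in it is invented for illustration; it is not taken from any real campaign tool.

```python
# Hypothetical sketch of a "script-based" text-generating bot: it fills
# canned message templates with randomly chosen slot values. Pointed at a
# platform's posting API, a loop like this is all a rudimentary political
# bot amounts to.
import random

TEMPLATES = [
    "RT if you agree: {candidate} is the only choice for {state}!",
    "{candidate} just won the debate. #Election2016",
    "Why is the media ignoring {candidate}? #WakeUp",
]
CANDIDATES = ["Candidate A", "Candidate B"]  # placeholder names
STATES = ["Ohio", "Florida", "Michigan"]

def generate_post() -> str:
    """Fill a random template; str.format ignores unused slot values."""
    return random.choice(TEMPLATES).format(
        candidate=random.choice(CANDIDATES),
        state=random.choice(STATES),
    )

if __name__ == "__main__":
    for _ in range(3):
        print(generate_post())
```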

1.2.1 Why so many bots?

For the past decade, scholars have attempted to understand the reasons behind the massive number of political bots that appeared during the 2016 election. Some researchers argued that political bots were a backlash against social media giants, which had been accused of having “abused [their] vast user base of 1.5 billion people.”[13] A 2015 article from the Independent noted that social media users had “turned to bots to make it harder for the social network giants to control the information.”[14] On this reading, the emergence of political bots was a way to “renegotiate the social landscape.”[15]

Other researchers have suggested that bots were the result of political operatives’ manipulation of automated accounts. For example, in October 2014, a New York Times article reported that “political operatives [are] hiring programmers to write software to automate some of the social media accounts and online postings of political campaigns in the 2012 campaign.”[16] In April 2015, a Politico article reported that political operatives were attempting to utilize “online social networks and their armies of robots to overwhelm opponents.”[17] This interpretation suggests that political bots were the result of the efforts of operatives in the political sphere.

In June 2016, an article on Politifact reported that “‘Bots’ may no longer be the term of art to describe a vast army of automated accounts spreading vitriolic, negative comments about politicians on social media.”[18] Politifact noted that automated accounts are still being used by political operatives, but that “there may be a larger population of accounts whose true nature cannot be definitively determined.”[18]

The term “bots” is not reserved solely for automated accounts; it is also applied to automated scripts, automated services, and devices. On social media, there are at least three different types of automated accounts (a minimal encoding of this taxonomy is sketched after the list):

Automatic “news” bots: News sources like the New York Times or BuzzFeed are often automated.

Automatic “posting” bots: Many large social media websites employ automated systems that allow users to post online, as opposed to doing so manually.

“Vote fraud” bots: Automated accounts are frequently used to influence the outcome of an election.
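
Here is one minimal way to encode the taxonomy above, assuming the goal is simply to tag observed accounts for later analysis; the labels are this article’s, not an industry standard, and the handles are invented.

```python
# Tagging observed accounts with the three bot categories listed above.
from dataclasses import dataclass
from enum import Enum

class BotType(Enum):
    NEWS = "automatic news bot"        # automated headline feeds
    POSTING = "automatic posting bot"  # platform-side auto-posting
    VOTE_FRAUD = "vote fraud bot"      # accounts used to sway elections

@dataclass
class ObservedAccount:
    handle: str
    bot_type: BotType

accounts = [
    ObservedAccount("@headline_feed", BotType.NEWS),
    ObservedAccount("@autoposter42", BotType.POSTING),
]
for a in accounts:
    print(f"{a.handle}: {a.bot_type.value}")
```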

To be sure, there are a great many automated social media accounts on Twitter, Facebook, and YouTube. Even so, there has been considerable concern that automated accounts are used to influence politics on those websites. Since the 2016 election, it has been widely discussed on the Internet that a “Russian bot army” was responsible for the spread of Russian propaganda during the election on both Facebook and Twitter. (In this post, I will use the terms “bots” and “automated accounts” interchangeably.)

Whether any Russian accounts were intentionally using automated accounts to boost Donald Trump’s popularity on social media is an open question, but whether any such automated accounts existed at all is now being hotly debated. (Answers to the question ‘Are Russians Using Twitterbots?’ can be found on Twitter under the #RussiaUsingTwitterbots tag.)

The idea of “bots” and “automated accounts” has been a subject of considerable controversy. There is a huge body of research on the topic, and no single researcher has claimed that Russian accounts, or any others, were intentionally manipulating the numbers on any social media website.

The idea of automated accounts also is not new. The first automated account was reportedly created in 1979. Many experts argue that automated accounts are not to be feared, as they are often far more reliable than human beings, and that automated accounts can be helpful. The use of automated accounts to increase the traffic to websites is also something that is well known.

There are also a number of very popular and trusted automated accounts on social media. These accounts frequently send out messages about events that are taking place, such as trending topics or breaking news. Some of the most popular automated accounts are the ones that send out “best of” tweets and retweet trending topics. Such automation is widely used and trusted, with prominent Twitter users such as Oprah Winfrey, Mark Zuckerberg, and Bill Gates all relying on some form of automated account.

Twitter is also aware of the popularity and importance of automated accounts. On a weekly basis, Twitter sends out a report that compares the number of followers for accounts with the number of active accounts, so that Twitter users can make sure that their automated accounts are not creating false followers. This is clearly an issue that Twitter is aware of, and that it is taking very seriously.

Automated Accounts in the 2016 Presidential Election

As we noted earlier, there are a variety of factors that suggest that there was significant Russian involvement in the 2016 election. (These issues are outlined in our recent research.)

One of the most prominent of these factors is the use of automated accounts to create fake followings. Automated accounts and “bots” were used by the Russian government to create fake followings and likes for Donald Trump and to sow distrust of Hillary Clinton on social media networks.

The data from Twitter confirms these suspicions. We used a specialized Twitter bot of our own creation that looks for accounts with “follower-only” or “follow-only” profiles, a pattern indicating a fake or automated Twitter account. It revealed a surprising number of fake followers and automated accounts.
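
The post does not say how the “follower-only” check worked; a plausible reconstruction is a simple ratio test over follower and following counts. The field names, the 0.99 threshold and the sample profiles below are all assumptions for illustration.

```python
# Flag accounts whose connections are almost entirely inbound, i.e.
# accounts that are followed in bulk but follow almost no one.
from dataclasses import dataclass

@dataclass
class Profile:
    handle: str
    followers: int
    following: int

def looks_follower_only(p: Profile, ratio: float = 0.99) -> bool:
    """True if nearly all of the account's connections are inbound."""
    total = p.followers + p.following
    return total > 0 and p.followers / total >= ratio

profiles = [
    Profile("@real_person", followers=310, following=280),
    Profile("@suspect_amplifier", followers=120_000, following=3),
]
print([p.handle for p in profiles if looks_follower_only(p)])
# ['@suspect_amplifier']
```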

The most significant figure is that there were over 50,000 such accounts, some of them having thousands or even millions of followers. This is well beyond the reach of an average individual or an average campaign, and it is unlikely that these followings were generated through organic means.

We are now looking at whether we can find a pattern in the accounts that were created for the purpose of faking followers. While we have not confirmed whether all the accounts were used by the same person or organization, there are reasons to believe that this is the case.

In order to characterize the accounts, we need to know the date of creation of each one. This is not known for a significant number of the accounts. However, for several dozen accounts we were able to infer a likely creation date.

The graph below illustrates the distribution of the accounts by age. The peak is very narrow, indicating that the majority of the accounts were created within a short span in the last few years. This may explain why this type of activity is relatively rare compared to other types of account activity (such as “unfollowing” or tweeting).
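
A sketch of the age-distribution computation described above, assuming creation dates have been recovered for a subset of accounts. The dates are fabricated placeholders, and a Counter stands in for the plotted histogram.

```python
# Bucket accounts by whole years of age at a reference date
# (here, US election day 2016).
from collections import Counter
from datetime import date

creation_dates = [
    date(2015, 11, 2), date(2015, 11, 3), date(2016, 1, 14),
    date(2012, 6, 30), date(2015, 12, 1),
]

def age_histogram(dates, as_of=date(2016, 11, 8)):
    """Count accounts per whole year of age at the reference date."""
    return Counter((as_of - d).days // 365 for d in dates)

print(age_histogram(creation_dates))
# Counter({1: 2, 0: 2, 4: 1}) -> a narrow peak of recently created accounts
```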

It is also possible that the accounts were created when the original user’s account was deleted. A significant number of Twitter users delete their accounts and later re-activate them, often using a new email address. This means that the deleted account is not removed from the follower statistics. The graph below shows the number of deleted accounts per 1,000 followers.

The majority of deleted accounts have only a few hundred followers, suggesting that they were created when the original account was deleted.
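
The post never defines the “deleted accounts per 1,000 followers” metric precisely. One plausible reading, sketched below with invented data, is the share of an account’s followers that have since been deleted, scaled to a per-1,000 rate.

```python
# Compute how many of an account's followers are deleted, per 1,000.
def deleted_per_thousand(followers: list) -> float:
    """followers: list of {'handle': str, 'deleted': bool} records."""
    if not followers:
        return 0.0
    deleted = sum(1 for f in followers if f["deleted"])
    return 1000 * deleted / len(followers)

sample = [{"handle": f"@f{i}", "deleted": i % 4 == 0} for i in range(200)]
print(deleted_per_thousand(sample))  # 250.0 -> 250 deleted per 1,000
```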

We are currently examining all the accounts in order to determine what they are. A large fraction of them appear to be inauthentic and therefore need to be removed. The accounts that are still active are more likely to be fake, as they do not appear to be “follower-only”. In addition, most of the “follower-only” accounts still have a small number of followers (fewer than 50), indicating that they were created while the original account was still active.

It is also likely that some accounts have been stolen and this is a topic that is currently being investigated.

This election cycle has thus seen very active use of bots, some of which appeared during the Republican primary debates. For example, the social media bots created around the hashtags #TrumpLiesToGetYourVote and #TrumpisAMoan were used to spread misinformation about the candidates or, alternatively, to undermine opposition candidates like Bernie Sanders and Donald Trump (the two eventual nominees) during the primary debates. We believe that the use of bots in US politics, especially for anti-democratic purposes, will continue as bots find new uses in promoting and spreading misinformation and, in some cases, conspiracy theories.

The most common and most cited example of the use of bot networks in the current presidential election (also cited in their article) is the use of Facebook bots to amplify a set of inflammatory and sometimes factually inaccurate statements to Facebook users in order to influence their opinions. If those of us who are critical of the effects of automation and algorithmisation want to avoid sounding like naive doomsayers, or like opponents of progress, it is imperative that we understand the role of technology in the current elections and not simply blame the machines for the results. This doesn’t mean that we should be complacent and wait for tech to deliver us from the ills of the world; rather, we must critically engage with technology, understand the implications of its advancement, and use this understanding to create the future we want.

While there is considerable public discussion about ‘fake news’ and its relation to both politics and the internet, discussion of how fake news actually spreads on social media platforms like Facebook and Twitter in the service of a narrative of misinformation is somewhat more muted. We believe that the debate around the role of bots in promoting fake news and disinformation is crucial and will play a critical role in the coming months and years of US presidential elections.

We will examine the role of bots in spreading misinformation in the upcoming election, how they are being used, and the implications of their usage. We will identify and illustrate their usage through a case study of the 2016 US Presidential election. We want to stress that this case study is not intended to be definitive. It is simply meant to demonstrate how much the use of bots changed election dynamics, in ways that are still not fully understood.

The 2016 Presidential Election: A Case Study

Let’s start by looking at the most recent Presidential election and examining the use of bots by the Democratic Party and Donald Trump.

Donald Trump and Hillary Clinton: As Hillary Clinton and her campaign were looking to win over voters who were turned off by Trump’s behavior during the primary, she looked to bots to boost her campaign. A bot, in this case, is a piece of software programmed to mimic the behavior of another person. Bots are usually employed to mimic a human being online.

They’re also extremely useful when you’re trying to fool someone online: the bot does your bidding while making it look like someone else said it. (source: nbcnews.com/politics/2016-el … -1C6221511 ).

To make the strategy work, Trump and his team needed to target an area that would be receptive to a Democratic message, reaching more than one demographic, since Democrats tend to have a stronger base in urban areas. In response, the Trump campaign used automated bots to spread its message among urban Democrats, who were key to its strategy to win votes.

According to The Washington Post, Trump’s team was very transparent about its use of bots. It was reported that the campaign purchased about $50,000 worth of automated messages between April and the end of July. The messages came in bursts and were purchased using Twitter’s application programming interface (API).

The Washington Post also notes that “the bots sent more than 40,000 messages, many from bot accounts that the Trump campaign bought using software that tracks Twitter hashtags, handles or avatars”. (source: wsj.com/articles/trump-bots- … 1462866861 ).
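
A sketch of the “bursts” pattern described here: a queue of pre-written messages flushed in fixed-size batches through a posting API. The function post_message is a placeholder of my own; a real bot would call the platform’s publishing endpoint through an API client, which is omitted here.

```python
# Publish messages in fixed-size bursts separated by long pauses.
import time

def post_message(text: str) -> None:
    """Placeholder for an API call that publishes one message."""
    print(f"posted: {text}")

def post_in_bursts(messages, burst_size=100, pause_seconds=3600):
    for i in range(0, len(messages), burst_size):
        for text in messages[i : i + burst_size]:
            post_message(text)
        if i + burst_size < len(messages):
            time.sleep(pause_seconds)  # lie low between bursts

post_in_bursts([f"message {n}" for n in range(5)],
               burst_size=2, pause_seconds=1)
```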

The Washington Post also highlights the use of bots by the Bernie Sanders campaign.

The use of such automation tools can be a form of psychological warfare, and in Trump’s case it might have been a deliberate attempt to sow division in the minds of the American people and make us question who the real supporters are and who is just paid by the GOP.

It might be a wise move by Trump’s team to take advantage of a platform that is used not only to inform, but also to divide.

1.2.2 The neutrality of communications technology

Communications technologies have no inherent preference for particular ideologies, be they conservative or liberal, Christian or Muslim, Democrat or Republican. Communications technology is neutral, and thus is not intrinsically “good” or “bad.” It can be used for good purposes, like providing useful information, as well as for nefarious ones, like causing harm to people or spreading propaganda. Some communications technologies are more vulnerable to misuse than others. However, we are witnessing a new, and very troubling, political discourse in the name of the free and open Internet. While the majority of our efforts should continue to be focused on addressing the established challenges of the Internet, notably online privacy and security, we should also be taking steps to counter the new “us vs. them” ideological polarization that we are seeing both in the United States and in other countries around the world.

Many communications technologies, such as the Internet, have no single owner and thus are not controlled by any centralized source. This means that communication can take place without the consent or knowledge of any owner, and the same technology can be used in different ways by different people. For this reason, it is difficult to control or prevent the misuse of technology for criminal or political purposes. The main barrier in this regard is not technology but people. As this project has shown, people can misuse technology for illegal and political purposes and still enjoy its benefits in the form of convenient, timely, and reliable communication.

///I cut it off, but it can just keep going ad infinitum if I let it run.

Open the pod bay doors, Hal

p.s. Of course a simulation of meaningful behavior does not prove there is consciousness. But I agree with McGinn in that the problem in the Chinese Room is an insoluble one, as it were.

Functionalism, computationalism, biological naturalism. You decide.

Fuck you, Dave.

Then, so much for AI:

youtu.be/c8N72t7aScY

An argument against AI that just came to me. I don’t even know if it’s legit tho. Check this out. Joe, an AI that is able to recognize facial expressions and body gestures, cracks a joke, and Bill doesn’t laugh but stands solemnly in silence. Can Joe infer from this lack of expected behavior (a laugh, smile, etc.) that the reason Bill isn’t laughing is that Joe’s comments at an earlier time offended Bill, and Bill’s still ticked off about it?

Joe is programmed to recognize behavior, and some such parameters are smiling, clapping, and laughing, along with the billions of semantic parameters that determine which words and combinations of words signify whether Bill thought the joke was funny.

Now if Joe were human, and remembered Bill’s reaction to the offensive comments earlier, he could infer immediately, with no more information needed, that the ‘meaning’ of Bill’s solemn silence is not that the joke isn’t funny, but that he doesn’t feel very fun right now and has other shit on his mind.

But then on the other hand, if all Joe is, machine or human, is a system that processes information like a Turing machine, then it shouldn’t be inconceivable that a synthetic, conscious machine might exist. We arrive back at McGinn’s verdict.

Really the alternatives are black and white. Either all that exists is of a material substance (making consciousness an emergent property of that material… In which case AI should be possible)… or there is another substance and some kind of interaction between these two substances - your brain and your mind/soul - is taking place in your body.

Unless of course, in a wholly determined universe, the joke’s on us, in that everything we think and feel and say and do is but an inherent, necessary manifestation of nature’s very own artificial intelligence.

HAL are us.