AI (authenticity inexistent)

As promised - and sorry for the delay, one of those weeks... - authenticity.

Starting with technology, starting with a topic that also stalks us - pun intended! - time and again: algorithms. Today we tackle the lack of authenticity, using as an example a Guardian article from last week. A preliminary note on it: despite the self-imposed limit on the references we make here - we want, and stick to, a selection of online media articles - the article in question points to some additional literature you might want to read. That said, moving on.

A mashup by way of introduction:

It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate. Google is doing a horrible, horrible job of delivering answers here.
“Good lord! That answer at the top. It’s a featured result. It’s called a “direct answer”. This is supposed to be indisputable. It’s Google’s highest endorsement. This is Google’s algorithm going terribly wrong.”

Since 2008, Google has attempted to predict what question you might be asking and offers you a choice. And this is what it did. It offered me a choice of potential questions it thought I might want to ask: “are jews a race?”, “are jews white?”, “are jews christians?”, and finally, “are jews evil?” Are Jews evil? It’s not a question I’ve ever thought of asking. I hadn’t gone looking for it. But there it was.

I press enter. A page of results appears. This was Google’s question. And this was Google’s answer: Jews are evil.

An entire page of results, nine out of 10 of which “confirm” this. The top result, from a site called Listovative, has the headline: “Top 10 Major Reasons Why People Hate Jews.” There’s one result in the 10 that offers a different point of view. It’s a link to a rather dense, scholarly book review from thetabletmag.com, a Jewish magazine, with the unfortunately misleading headline: “Why Literally Everybody In the World Hates Jews.”

And evil Jews are just the start of it. There are also evil women. I didn’t go looking for them either. This is what I type: “a-r-e w-o-m-e-n”. And Google offers me just two choices, the first of which is: “Are women evil?” I press return. Yes, they are. Every one of the 10 results “confirms” that they are.

Next I type: “a-r-e m-u-s-l-i-m-s”. And Google suggests I should ask: “Are Muslims bad?” And here’s what I find out: yes, they are. That’s what the top result says and six of the others. Without typing anything else, simply putting the cursor in the search box, Google offers me two new searches and I go for the first, “Islam is bad for society”. In the next list of suggestions, I’m offered: “Islam must be destroyed.”

Jews are evil. Muslims need to be eradicated. And Hitler? Do you want to know about Hitler? Let’s Google it. “Was Hitler bad?” I type. And here’s Google’s top result: “10 Reasons Why Hitler Was One Of The Good Guys”.

Eight out of the other 10 search results agree: Hitler really wasn’t that bad.

Google “Is Google racist?” and the featured result – the Google answer boxed out at the top of the page – is quite clear: no. It is not. Certainly the results about Google on Google don’t seem entirely neutral.

Why did my Google search return nine out of 10 search results that claim Jews are evil? We don’t know and we have no way of knowing.

in "Google, democracy and the truth about internet search" 4 December 2016

As it happens, we do. And we know. Enter the algorithm and all the rest.

The contents of a page of search results can influence people’s views and opinions (...) And people don’t question this. Google isn’t just offering a suggestion. This is a negative suggestion and we know that negative suggestions, depending on lots of things, can draw between five and 15 times as many clicks. And this is all programmed. And it could be programmed differently. (...) We are talking about the most powerful mind-control machine ever invented in the history of the human race. And people don’t even notice it.
in "Google, democracy and the truth about internet search" 4 December 2016

Technology as the fomenter of an alternative reality which, in turn, risks becoming more authentic than the original.

These tools offer remarkable empowerment, but there’s a dark side to it. It enables people to do very cynical, damaging things.
Alphabet, Google’s parent company, now has the greatest concentration of artificial intelligence experts in the world. (...)  It’s able to attract the world’s top computer scientists, physicists and engineers.
in "Google, democracy and the truth about internet search" 4 December 2016

Or, what around here we call the "end of innocence":

The implications about the power and reach of these companies is only now seeping into the public consciousness. (...) “It’s kind of weird right now,” she says, “because people are finally saying, ‘Gee, Facebook and Google really have a lot of power’ like it’s this big revelation. And it’s like, ‘D’oh.’”
in "Google, democracy and the truth about internet search" 4 December 2016

And so we enter the insidious part that concerns us, touching on fake news, bots, and algorithms. First, the humor:

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation." Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks (...) and Tay (...) started repeating these sentiments back to users. It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash?
in "Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day" 24 March 2016
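The Tay failure mode needs no exotic machinery. A toy sketch of the feedback loop the article describes - a bot that treats every user message as training data. Nothing here resembles Tay's real internals; it is just the mechanism "bots that mirror their users" reduced to a few lines:

```python
import random

# Toy "mirror bot": it replies by sampling from phrases users fed it.
# Whatever the crowd teaches, the bot repeats - no malice required.

class MirrorBot:
    def __init__(self):
        self.memory = []

    def chat(self, user_message):
        self.memory.append(user_message)   # every input becomes "training data"
        return random.choice(self.memory)  # replies are sampled from what it learned

bot = MirrorBot()
for msg in ["hello!", "nice day", "some hateful slogan", "some hateful slogan"]:
    bot.chat(msg)

# Half the bot's memory is now the repeated slogan,
# so half of its future replies will be too.
print(bot.memory.count("some hateful slogan") / len(bot.memory))  # 0.5
```

Flood the bot with one message often enough and its output distribution is that message; that is the whole "corruption" in miniature.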

Now, the motherfuckery:

How rightwing websites have spread their message:

They have created a web that is bleeding through on to our web. This isn’t a conspiracy. There isn’t one person who’s created this. It’s a vast system of hundreds of different sites that are using all the same tricks that all websites use. They’re sending out thousands of links to other sites and together this has created a vast satellite system of rightwing news and propaganda that has completely surrounded the mainstream media system.
He found 23,000 pages and 1.3m hyperlinks. And Facebook is just the amplification device. (...) The best way of describing it is as an ecosystem. This really goes way beyond individual sites or individual stories. What this map shows is the distribution network and you can see that it’s surrounding and actually choking the mainstream news ecosystem.
in "Google, democracy and the truth about internet search" 4 December 2016

If we still haven't convinced you, all the times we've tossed the $$$ out of our total framework, notice that the other side has no such scruples:

This isn’t a byproduct of the internet. And it’s not even being done for commercial reasons. It’s motivated by ideology, by people who are quite deliberately trying to destabilise the internet.
in "Google, democracy and the truth about internet search" 4 December 2016

So, my punx: when we push you to publish online like a muthafucka and to shove the $$$ incentive where the sun don't shine? It's because the net is in its infancy, learning to crawl, and the only teachers it has come wearing swastikas and ridiculous little mustaches... Moving on,

They try to find the tricks that will move them up Google’s PageRank system. They try and “game” the algorithm. And what his map shows is how well they’re doing that. (...) The right has colonised the digital space around these subjects – Muslims, women, Jews, the Holocaust, black people – far more effectively than the liberal left. It’s a network. It’s far more powerful than any one actor. It’s an information war.

It’s almost got a life of its own (...) and it’s learning. Every day, it’s getting stronger [:]  the more people who search for information about Jews, the more people will see links to hate sites, and the more they click on those links (very few people click on to the second page of results) the more traffic the sites will get, the more links they will accrue and the more authoritative they will appear. This is an entirely circular knowledge economy that has only one outcome: an amplification of the message. Jews are evil. Women are evil. Islam must be destroyed. Hitler was one of the good guys.
in "Google, democracy and the truth about internet search" 4 December 2016
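The "circular knowledge economy" is, at bottom, the link-analysis math that PageRank-style ranking runs on: a cluster of sites that all link to each other accrues authority from itself. A toy power-iteration sketch (invented site names, a simplification of the published PageRank formula, not Google's actual production algorithm):

```python
# Toy PageRank: a mutually-linking cluster ("the satellite system")
# outranks an isolated page, regardless of content quality.

DAMPING = 0.85

def pagerank(links, iterations=50):
    """links: {page: [pages it links to]}. Returns {page: rank}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - DAMPING) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                 # dangling page: spread rank evenly
                for p in pages:
                    new[p] += DAMPING * rank[page] / n
            else:                            # split rank among outgoing links
                share = DAMPING * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
        rank = new
    return rank

# Three hate sites link to one another; the mainstream page gets no
# inbound links from the cluster - even its critical outbound link feeds it.
web = {
    "hate1": ["hate2", "hate3"],
    "hate2": ["hate1", "hate3"],
    "hate3": ["hate1", "hate2"],
    "mainstream": ["hate1"],
}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))  # cluster pages rank on top
```

The loop in the quote - more searches, more clicks, more links, more apparent authority - is exactly this self-reinforcing structure iterated daily at web scale.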

And it gets worse. If you still remember our last entry on surveillance, the parasitic host carries on:

More than just spreading rightwing ideology, they are being used to track and monitor and influence anyone who comes across their content. “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go.”
in "Google, democracy and the truth about internet search" 4 December 2016
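The tracking mechanism quoted above can be modeled in a few lines: one script embedded on many unrelated sites, one cookie id, one accumulating profile. Purely illustrative - the names and sites are invented, and no real tracker's code looks this simple:

```python
# Toy model of third-party tracking: many unrelated sites embed the same
# tracker, which recognises the visitor by a cookie id and accumulates a
# browsing profile across all of them.

class Tracker:
    def __init__(self):
        self.profiles = {}                  # cookie_id -> list of visits

    def embed_hit(self, cookie_id, site, page):
        """Called by the embedded script on every page load."""
        self.profiles.setdefault(cookie_id, []).append((site, page))

    def profile(self, cookie_id):
        return self.profiles.get(cookie_id, [])

tracker = Tracker()                         # one tracker, many sites
tracker.embed_hit("user-42", "rightwing-news.example", "/some-post")
tracker.embed_hit("user-42", "recipe-blog.example", "/cakes")
tracker.embed_hit("user-42", "local-paper.example", "/elections")

# The sites never talked to each other, yet the tracker holds the full trail:
print(len(tracker.profile("user-42")))  # 3
```

That cross-site trail is the raw material for the "highly personalised political messages" the article describes.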

"Cambridge Analytica"? We know that:

  • Was employed by both the Vote Leave campaign and the Trump campaign
  • Steve Bannon, Breitbart News, chief strategist to Trump, is on Cambridge Analytica’s board
  • It has emerged that the company is in talks to undertake political messaging work for the Trump administration
  • It claims to have built psychological profiles using 5,000 separate pieces of data on 220 million American voters (...) it knows their quirks and nuances and daily habits and can target them individually.
  • Because they have so much data on individuals and they use such phenomenally powerful distribution networks, they allow campaigns to bypass a lot of existing laws.
    in "Google, democracy and the truth about internet search" 4 December 2016
“The more we argue with them, the more they know about us,” he says. “It all feeds into a circular system. What we’re seeing here is new era of network propaganda.”

Enter Authenticity - or, an alternative reality made to your measure, and to the measure that suits whoever sells it to you and sells you.

We don’t realise that the Facebook page we are looking at, the Google page, the ads that we are seeing, the search results we are using, are all being personalised to us. We don’t see it because we have nothing to compare it to. And it is not being monitored or recorded. It is not being regulated. We are inside a machine and we simply have no way of seeing the controls. Most of the time, we don’t even realise that there are controls.

Most of us consider the internet to be like “the air that we breathe and the water that we drink”. It surrounds us. We use it. And we don’t question it. But this is not a natural landscape. Programmers and executives and editors and designers, they make this landscape. They are human beings and they all make choices.
in "Google, democracy and the truth about internet search" 4 December 2016
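What "personalised to us" means mechanically can be sketched as a scoring loop: rank items by overlap with what you already clicked. The scoring rule and the data below are invented for illustration, not any platform's real feed code:

```python
from collections import Counter

# Toy feed personalisation: rank items by tag overlap with your click
# history. Each user sees only their own bubble, with nothing to compare
# it to - exactly the invisibility the article points at.

def personalised_feed(click_history, items, k=3):
    interests = Counter(tag for item in click_history for tag in item["tags"])
    def score(item):
        return sum(interests[tag] for tag in item["tags"])
    return [it["title"] for it in sorted(items, key=score, reverse=True)[:k]]

items = [
    {"title": "climate report", "tags": ["science", "climate"]},
    {"title": "hoax expose",    "tags": ["conspiracy", "climate"]},
    {"title": "election recap", "tags": ["politics"]},
]
skeptic_history = [{"tags": ["conspiracy"]}] * 5
print(personalised_feed(skeptic_history, items, k=1))  # ['hoax expose']
```

The bubble confirms itself: past clicks weight future ranking, and the user never sees the items that scored low, so there are "no controls" visible from the inside.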

Algorithm: been here + growing concern:

“We need to have regular audits of these systems” (…) a growing movement of academics who are calling for “algorithmic accountability”.
in "Google, democracy and the truth about internet search" 4 December 2016

If this sounds like paranoia, that doesn't mean you have no reasons to be paranoid.

Robert Epstein, a research psychologist at the American Institute for Behavioural Research and Technology, and the author of the study that Google has publicly criticised show[ed] how search-rank results affect voting patterns.
in "Google, democracy and the truth about internet search" 4 December 2016

And you don't have to believe Epstein; believe Hawking et al.:

Mr. Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a letter unveiled at the International Joint Conference last month in Buenos Aires, Argentina. The letter warns that artificial intelligence can potentially be more dangerous than nuclear weapons.

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” (...) “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
in "Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence" 19 August 2015

That was last year. A small big parenthesis on AI and regulatory oversight and the like, because this one is funny:

Google promised to form an AI ethics board to ensure the new technology was not abused. Two-and-a-half years on, however, it is unclear whether the board has ever met, or even who is on it.
in "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft" 28 Sep 2016

Google, Facebook, Amazon, IBM and Microsoft are joining forces to create a new AI partnership dedicated to advancing public understanding of the sector
in "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft" 28 Sep 2016

Going by the unwieldy name of the Partnership on Artificial Intelligence to Benefit People and Society, the alliance isn’t a lobbying organisation (at least, it says it “does not intend” to lobby government bodies). Instead, it says it will “conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology”.
in "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft" 28 Sep 2016

A cynical note on the chosen name: too explicit not to want to be exactly its opposite, in the usual logic whereby an "Environmental Protection Act" is a permissive law tailor-made for polluting industries.

Second note: two AI "greats" are absent from this partnership, Apple and Elon Musk's OpenAI. About the latter and its mission statement, we believe he may well actually believe in the intention to "advance digital intelligence in the way that is most likely to benefit humanity as a whole". On the partnership, an OpenAI member commented:

“We’re happy to see the launch of the group — coordination in the industry is good for everyone. We’re looking forward to non-profits being included as first-class members in the future.”
in "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft" 28 Sep 2016

...Which leads us to believe that non-profits are not yet treated as equals in that partnership. On Apple there are fresh developments. From the news:

Apple, which has been loudly trumpeting its own AI efforts in areas such as personal assistants, image recognition and voice control, is not included in the group. The company has a long history of going it alone.
in "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft" 28 Sep 2016

- that was in September; today, the news:

Apple is finally going to start publishing its AI research (...) The Californian tech giant has traditionally kept research breakthroughs to itself, seeing any developments as valuable intellectual property (IP), so this is a major change in direction.
in "Apple is finally going to start publishing its AI research" 6 December 2016

Why this openness and transparency from a company that "is off the scale in terms of secrecy", more restrictive even than a Facebook or a Google?

Unlike Facebook and Google, which let employees publish their academic breakthroughs in scientific journals and on blogs, Apple prevents its staff from talking about their research both online and offline. They're allowed to attend conferences but they don't give talks about what Apple is working on and they generally only disclose their employer when they're asked to.
in "Facebook's AI director explained why some of the world's brightest minds might not want to work for Apple" 1 Nov 2016

For one reason:

As it happens, this field of research is more important to the future of tech giants like Apple than any other.
in "Artificial Intelligence Just Broke Steve Jobs’ Wall of Secrecy" 6 Dec 2016

Which translates into two others:

"Apple's closed off approach could hinder its ability to hire the best people in the field of AI"
in "Apple is finally going to start publishing its AI research" 6 December 2016

"the only way he can recruit top researchers is to reassure them that once they get to Apple, they can continue to publish their work and share their ideas with the larger AI community."
in "Facebook's AI director explained why some of the world's brightest minds might not want to work for Apple" 1 Nov 2016

A second reason, this one more sinister:

Deep learning, you see, requires enormous amounts of digital data, and Apple’s privacy policies could restrict how much data it can collect for training deep neural networks. But clearly, Apple is intent on embracing this data-hungry approach to AI.
in "Artificial Intelligence Just Broke Steve Jobs’ Wall of Secrecy" 6 Dec 2016

Mo' data on u bitcchis!

Google and Facebook are at the forefront of AI. They are going to own the future. (...) “Politicians don’t think long term. And corporations don’t think long term because they’re focused on the next quarterly results and that’s what makes Google and Facebook interesting and different. They are absolutely thinking long term. They have the resources, the money, and the ambition to do whatever they want.
in "Google, democracy and the truth about internet search" 4 December 2016
“They want to digitise every book in the world: they do it.

They understand your emotional responses and how to trigger them. They know your likes, dislikes, where you live, what you eat, what makes you laugh, what makes you cry.

Google wants to know what you want before you know yourself. “That’s the next stage,” (...) “We talk about the omniscience of these tech giants, but that omniscience takes a huge step forward again if they are able to predict.

On this ability to predict: remember the futuristic idea we noted down here? It looks less and less like science fiction. And speaking of it:

And now they are moving beyond the digital world into the physical. The next frontiers are healthcare, transportation, energy. And just as Google is a near-monopoly for search, its ambition to own and control the physical infrastructure of our lives is what’s coming next. It already owns our data and with it our identity. What will it mean when it moves into all the other areas of our lives?
in "Google, democracy and the truth about internet search" 4 December 2016

Recalling the search suggestions and proposed results with which we opened this latest post, this one:

But when you move into the physical realm, and these concepts become part of the tools being deployed when you navigate around your city or influence how people are employed, I think that has really pernicious consequences.
in "Google, democracy and the truth about internet search" 4 December 2016
“I would say that everybody has been really naive and we need to reset ourselves to a much more cynical place and proceed on that basis”

But on regulation, we're done talking. Familiar? From the it-is-media/it-isn't-media debate:

We lack any sort of framework to deal with the potential impact of these companies on the democratic process. “We have structures that deal with powerful media corporations. We have competition laws. But these companies are not being held responsible. There are no powers to get Google or Facebook to disclose anything. There’s an editorial function to Google and Facebook but it’s being done by sophisticated algorithms. They say it’s machines not editors. But that’s simply a mechanised editorial function.”  Companies are terrified of acquiring editorial responsibilities they don’t want.
in "Google, democracy and the truth about internet search" 4 December 2016

When and if they take that interest, this absence of regulation - unwanted by them, wanted by them when it consolidates their monopoly - patches the damage retroactively, that is: too late.

Technology companies (...) deal with the ethical impact of their products retrospectively (…) The Silicon Valley mantra of “fail often, fail fast” is a poor strategy when it comes to the ethical and cultural impacts of these businesses. It is equivalent to “too little, too late”, and has very high, long-term costs of global significance, in preventable or mitigable harms, wasted resources, missed opportunities, lack of participation, misguided caution and lower resilience.
in "Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis" 29 Nov 2016

Do we want restrictive laws? Rarely, and the exception is the usual one: when some people's freedoms come at the expense of others'. Corporations grounded in the "market" and, as a consequence, in the habit of restricting freedoms, both their customers' - suckas! - and, indirectly, third parties': yes, we do.

A lack of proactive ethics foresight thwarts decision-making, undermines management practices and damages strategies for digital innovation. (...) It would have been preventable with an ethical impact analysis that could have considered the discriminatory impact of simple, algorithmic decisions.
in "Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis" 29 Nov 2016
We must rebuild trust through credibility, transparency and accountability

Because, and here we come back to the beginning, the fault lies not with the algorithm's decisions but with the humans who guide it. From a Wired puff-piece op-ed published today, meant to calm some of the AI hysteria running around the webs, the highlight worth recovering:

Artificial Intelligence Is More Artificial Than Intelligent (...) Existing technologies are not nearly advanced enough to master simple tasks on their own (...) The fact is, no existing AI technologies can master even the simplest challenges without human-provided context. So what do we mean by context? (...) Human hand-holding and “training”.
in "Artificial Intelligence Is More Artificial Than Intelligent" 7 Dec 2016

Not contradicting it at all, and that being the case, now go back to the mashup in the introduction. Muthafuckas.

further reading

  • ai
  • mo' racist bias

"We know that if there’s a negative autocomplete suggestion in the list, it will draw somewhere between five and 15 times as many clicks as a neutral suggestion"

Google’s search algorithm appears to be systematically promoting information that is either false or slanted with an extreme rightwing bias on subjects as varied as climate change and homosexuality. Following a recent investigation by the Observer, which found that Google’s search engine prominently suggests neo-Nazi websites and antisemitic writing, the Guardian has uncovered a dozen additional examples of biased search results.

in "How Google's search algorithm spreads false information with a rightwing bias" 16 Dec 2016

  • ai
  • deep learning

"This attitude toward artificial intelligence was evolutionary rather than creationist."

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I.

in "The Great AI awakening" 14 Dec 2016

  • ai
  • facebook
  • $$$

Somewhere an AI would smile. If it had a face.

Facebook’s mission statement is to “give people the power to share and make the world more open and connected”. Rather than serving this goal, Facebook’s AIs are servicing a far older and more well-established social goal, which was designed for the betterment of mankind. This is the goal of maximising value in pursuit of economic self-interest.

Facebook, a legal person who is programmed, within our economic and legal system, with a single mandated goal: the delivery of maximum value to its shareholders. Facebook’s AIs are the technological limbs of that person, and they must ultimately reach out into the world to carry out that goal.

The unforeseen outcomes of neoliberal values coupled to highly efficient AIs are ultimately emergent effects on our social fabric. AIs “personalise” newsfeeds for you, but this personalisation isn’t really for your benefit, it’s to place you in a tighter demographic category, making you a more targeted value proposition to sell to advertisers. The effect of this is that we have all been herded into digital echo chambers.

And it is doing a great job. Through the election cycle Facebook’s stock rose 924% faster than the US stock market index

in "Click here for the AI apocalypse (brought to you by Facebook)" 23 Nov 2016

  • ai
  • fake news
In the coming decade, AI-powered smart filters developed by technology companies will weigh the legitimacy of information before audiences ever get a chance to determine it for themselves.

News is the fabric that weaves together our realities, and Google, Facebook, Twitter –  through always-on phone screens, activity trackers, and 24/7 GPS and indoor Bluetooth trails – represent our interface with this brave new world.

The industry’s filtering response to fake news could signal the end of legitimate news outlets that make an effort to draw attention to issues they feel are underrepresented or intentionally suppressed by the mainstream media. (...) Fake news is a lot like pornography  –  especially in terms of how gatekeepers classify certain content (and known sources of content) they deem unsuitable for their audiences.

I can see where we might be headed: the suppression of alternative voices and the censorship of content that addresses certain issues. (...) The filters in the future won’t be programmed to ban pornographic content, or prevent user harassment and abuse. The next era of the infowars is likely to result in the most pervasive filter yet: it’s likely to normalise the weeding out of viewpoints that are in conflict with established interests.

in "Stop worrying about fake news. What comes next will be much worse" 9 Dec 2016

  • ai
  • $$$
  • luddites

As more jobs are automated, this trend seems likely to continue.

So who is right: the pessimists (many of them techie types), who say this time is different and machines really will take all the jobs, or the optimists (mostly economists and historians), who insist that in the end technology always creates more jobs than it destroys?

What determines vulnerability to automation, experts say, is not so much whether the work concerned is manual or white-collar but whether or not it is routine (...) people [who] work in creative fields [are] less susceptible to automation.

“Job polarisation”: the workforce bifurcates into two groups doing non-routine work: highly paid, skilled workers (such as architects and senior managers) on the one hand and low-paid, unskilled workers (such as cleaners and burger-flippers) on the other.

in "Will smarter machines cause mass unemployment?" 25 June 2016

poor poor rich folks