Let’s begin by operationalizing three important and constantly abused terms. They are abused mostly by politicians and media who fully understand the nature of the abuse.
Propaganda: Fairly obvious government messaging that any reasonably thoughtful or aware person can spot.
Disinformation: Deliberate untruths or rhetorical devices intended to harm the target. The target can be an entire population.
Misinformation: The repetition of disinformation by a naive target that believes it and passes it on as truth.
For a sterling example of disinformation, watch Justin Trudeau use that word. He actually uses the word “Disinformation” as disinformation. And I’m sure he thinks that’s clever.
Disinformation also relies on many tactics, which can be subtle and hard to spot if you don’t already know they are tactics. One example is an apparent government leak: the information appears to hurt the government that leaked it, but it contains a key bit of disinformation intended to create the behaviour they want. One instance was a leak from early in the Covid years, in which a Canadian government memo was said to have discussed building what were effectively gulags. I don’t remember all the details, but I should. This site published it as a maybe-it’s-true item. It came from a great source within the government, but I think as misinformation, not deliberate misdirection.
This site covered a more blatant form of disinformation a few years ago, when Global News fabricated a motive for a leftist Trump hater at a performance of Fiddler on the Roof, claiming that the man was a “Trump supporter”. Global News was in possession of the facts when they wrote the article claiming he was a Trump supporter, and they did not update it to reflect the truth for weeks, which is as long as I bothered to check up on it.
The same day the man stood up and drunkenly gave a Nazi salute at the play, right before intermission, video was circulating of him explaining to the police in the lobby of the theatre at intermission that he hated Trump so much that the evildoers in the play reminded him of Trump, that Trump is like Hitler, and so he gave the Nazi salute. Global knew the truth. They could not have not known it, as anyone who read the source news would have seen it.
1 of 2: This is Satchel Tate from Nova Scotia. During a game last week, playing for @HPAsBaseball 13U, he had a stroke. His recovery will be lengthy and difficult, so I’m asking for your help; not with money, but with a card or letter. pic.twitter.com/4w7liHcfze
— Jamie Campbell (@SNETCampbell) August 6, 2022
Before I post the excerpt in question, ask yourself what you would want to know most about the story. What would the single burning question be? Not just since Covid, not just since the vaxx rollout, but at any point, what would you want to know about the story? How his family felt about him? How his teammates felt? A tribute by the Toronto Blue Jays? Or what on God’s green earth would cause a healthy 12-year-old athlete to suffer a crippling stroke?
Now add in an experimental gene therapy that all kids have to take in order to play any form of organized sports, and ask the same question. A gene therapy that is known at this point, without argument, to cause blood clots, especially in athletic young men. So much so that Denmark has now banned the injections for people under 18 years old.
The article gives a lot of nicey fluff about how people feel, none of which is anger, and then goes on to say this:
Notice how they slipped that in.
For the team, a team whose members would all have had to take the same injections in order to play, “the cause is the least of their concerns”.
Whoever did that interview was more lawyer than journalist. And I bet I know who has that lawyer on retainer.
But given Global’s track record of outright lies to suit a political narrative, we probably can’t know what they really said. And given how the media treats any statement about the injections that might cause people to question taking them, it’s also easy to see this piece for what it is.
Outright Soviet-level propaganda, and disinformation.
And the harm it does, and to whom?
I’m sure we can all figure that out.
Meanwhile, in Clayton Valley, USA, another 12-year-old athlete dies due to a “severe medical emergency”.
Two days after he was brought to the hospital, he passed away.
“With deep sadness and heavy hearts, we inform you of the passing of our jr. varsity football athlete. As many of you are aware our jr. varsity athlete unexpectedly experienced a medical emergency unrelated to the sport of football Friday during our evening football practice,” Clayton Valley Athletic Association announced on Sunday.
Eeyore for VladTepesBlog
Even if one disregards the mountain of evidence implicating the vaccines as the cause, any rational being would be curious about the cause of these unusual events.
And yet, nothing but crickets. Healthy doctors, pro athletes, young children… dropping dead in unusual numbers, and supposedly no one is in the least bit curious. No inquiries. No autopsies. No nothing. How odd.
World Economic Forum – CYBERSECURITY
The solution to online abuse? AI plus human intelligence
-Bad actors perpetrating online harms are getting more dangerous and sophisticated, challenging current trust and safety processes.
-Existing methodologies, including automated detection and manual moderation, are limited in their ability to adapt to complex threats at scale.
-A new framework incorporating the strengths of humans and machines is required.
With 63% of the world’s population online, the internet is a mirror of society: it speaks all languages, contains every opinion and hosts a wide range of (sometimes unsavoury) individuals.
As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams typically found within online platforms responsible for removing abusive content and enforcing platform policies) are challenged by an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud; and increasingly advanced actors misusing platforms in unique ways.
The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a profound familiarity with different types of abuse, an understanding of hate group verbiage, fluency in terrorist languages and nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface.
A more sophisticated approach is required. By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision.
Online harms are becoming more complex
Since the introduction of the internet, wars have been fought, recessions have come and gone and new viruses have wreaked havoc. While the internet played a vital role in how these events were perceived, other changes – like the radicalization of extreme opinions, the spread of misinformation and the wide reach of child sexual abuse material (CSAM) – have been enabled by it.
Online platforms’ attempts to stop these abuses have led to a Roadrunner meets Wile E. Coyote-like situation, where threat actors use increasingly sophisticated tactics to avoid evolving detection mechanisms. This has resulted in the development of new slang, like child predators referring to “cheese pizza” and other terms involving the letters c and p instead of “child pornography”. New methodologies are employed, such as using link shorteners to hide a reference to a disinformation website; and abuse tactics, such as the off-platform coordination of attacks on minorities.
Traditional methods aren’t enough
The basis of most harmful content detection methods is artificial intelligence (AI). This powerful technology relies on massive training sets to quickly identify violative behaviours at scale. Because it is built on data sets of known abuses in familiar languages, AI can detect known abuses in familiar languages, but it is less effective at detecting nuanced violations in languages it wasn’t trained on – a gaping hole of which threat actors can take advantage.
While providing speed and scale, AI also lacks context: a critical component of trust and safety work. For example, robust AI models exist to detect nudity but few can discern whether that nudity is part of a renaissance painting or a pornographic image. Similarly, most models can’t decipher whether the knife featured in a video is being used to promote a butcher’s equipment or a violent attack. This lack of context may lead to over-moderating, limiting free speech on online platforms; or under-moderating, which is a risk to user safety.
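To make that limitation concrete, here is a minimal sketch, in Python, of what purely training-set-based detection amounts to. Every term and example post below is a hypothetical placeholder, not a real moderation lexicon:

```python
# "Training set": known abusive vocabulary in one familiar language.
# All terms here are hypothetical placeholders.
KNOWN_ABUSE_TERMS = {"badword1", "badword2", "scamlink"}

def flag_known_abuse(post: str) -> bool:
    """Flags a post only if it contains vocabulary seen in training."""
    tokens = {t.strip(".,!?").lower() for t in post.split()}
    return bool(tokens & KNOWN_ABUSE_TERMS)

# Detected: uses vocabulary the detector was "trained" on.
print(flag_known_abuse("Check out this scamlink now"))          # True

# Missed: threat actors coin new slang the training set has never
# seen, so the same intent slips through (the cat-and-mouse above).
print(flag_known_abuse("Check out this freshcoinage now"))      # False

# Missed nuance: keyword presence says nothing about context, so a
# post *reporting* abuse is flagged the same as one *committing* it.
print(flag_known_abuse("Report: scamlink campaigns are rising"))  # True
```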
In contrast to AI, human moderators and subject-matter experts can detect nuanced abuse and understand many languages and cultures. This precision, however, is limited by the analyst’s specific area of expertise: a human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya. This limited focus means that for human moderators to be effective, they must be part of large, robust teams – a demanding effort for most technology companies.
The human element should also not be ignored. The thousands of moderators tasked with keeping abhorrent content offline must witness it themselves, placing them at high risk of mental illness and traumatic disorders. Beyond care for moderators, this situation may limit the operation’s effectiveness, as high churn and staffing instabilities lead to low organizational stability and inevitable moderation mistakes.
The “Trust & Safety” intelligent solution
While AI provides speed and scale and human moderators provide precision, their combined efforts are still not enough to proactively detect harm before it reaches platforms. To achieve proactivity, trust and safety teams must understand that abusive content doesn’t start and stop on their platforms. Before reaching mainstream platforms, threat actors congregate in the darkest corners of the web to define new keywords, share URLs to resources and discuss new dissemination tactics at length. These secret places where terrorists, hate groups, child predators and disinformation agents freely communicate can provide a trove of information for teams seeking to keep their users safe.
The problem is that accessing this information is in no way scalable. Classic intelligence collection requires deep research, expertise, access and a fair amount of assimilation skills – human capacities that cannot be mimicked by a machine.
Baking in intelligence
We’ve established that the standard process of AI algorithms for scale and human moderators for precision doesn’t adequately balance scale, novelty and nuance. We’ve also established that off-platform intelligence collecting can provide context and nuance, but not scale and speed.
To overcome the barriers of traditional detection methodologies, we propose a new framework: rather than relying on AI to detect at scale and humans to review edge cases, an intelligence-based approach is crucial.
By bringing human-curated, multi-language, off-platform intelligence into learning sets, AI will then be able to detect nuanced, novel abuses at scale, before they reach mainstream platforms. Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in. This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale.
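A minimal sketch of that feedback loop, assuming a toy scoring function and hand-picked thresholds (everything below is a hypothetical placeholder):

```python
# Seed "training set" of (text, label) pairs; 1 = abusive, 0 = benign.
training_set = [("known abusive phrase", 1), ("ordinary chatter", 0)]

def score(post: str) -> float:
    """Toy confidence score: share of tokens seen in abusive examples."""
    abusive_vocab = {t for text, label in training_set if label == 1
                     for t in text.split()}
    tokens = post.lower().split()
    return sum(t in abusive_vocab for t in tokens) / max(len(tokens), 1)

def moderate(posts, review_queue):
    for post in posts:
        s = score(post)
        if s > 0.8:                    # confident: act automatically
            print("removed:", post)
        elif s > 0.2:                  # edge case: route to a human
            review_queue.append(post)
        # else: leave the post up

def incorporate_review(post: str, human_label: int):
    """Each human decision becomes new training data, so the automated
    layer gets a little more capable after every moderation call."""
    training_set.append((post.lower(), human_label))

queue = []
moderate(["known abusive phrase", "novel abusive phrase"], queue)
for post in queue:
    incorporate_review(post, human_label=1)   # human verdict: abusive
# After this feedback, "novel" is now part of the abusive vocabulary.
```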
The outcome
The lag between the advent of novel abuse tactics and when AI can detect them is what allows online harms to proliferate. Incorporating intelligence into the content moderation process allows teams to significantly reduce the time between when new abuse methods are introduced and when AI can detect them. In this way, trust and safety teams can stop threats rising online before they reach users.
https://www.weforum.org/agenda/2022/08/online-abuse-artificial-intelligence-human-input
https://www.youtube.com/watch?v=V3cwtppqZhs
=========================
World Economic Forum – Is AI the only antidote to disinformation?
The stability of our society is more threatened by disinformation than anything else we can imagine. It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the volumes of emotionally charged and socially divisive pieces of misinformation, much of it fuelled by emerging technology. This content either manipulates the perceptions of people or propagates absolute falsehoods in society.
AI-based programmes are being used to create deep fakes of political leaders by adapting video, audio and pictures. Such deep fakes can be used to sow the seeds of discord in society and create chaos in markets. AI is also getting better at generating human-like content using language models such as GPT-3 that can author articles, poems and essays based on a single-line prompt. Doctoring of all types of content has been made so seamless by AI that open-source software like FaceSwap and DeepFaceLab can enable even discreet amateurs to be epicentres of social disharmony. In a time when humans can no longer comprehend where to place their trust, “technology for good” looks to be the only saviour.
Russia and Iran are Meta’s top sources of disinformation.
Semantic analytics for basic filtering of disinformation
The very first idea that comes to mind to combat disinformation with technology is content analytics. AI-based tools can perform linguistic analysis of textual content and detect cues including word patterns, syntax construction and readability, to differentiate computer-generated content from human-produced text. Such algorithms can take any piece of text and check for word vectors, word positioning and connotation to identify traces of hate speech. Moreover, AI algorithms can reverse engineer manipulated images and videos to detect deep fakes and highlight content that needs to be flagged.
But that’s not enough: generative adversarial networks are becoming so sophisticated that algorithms will soon produce content that is indistinguishable from that produced by humans. To add to these woes, such semantic analysis algorithms cannot interpret hate speech images that have not been manipulated but are instead shared with the wrong or malicious context or accompanying content. They also cannot check whether the claims made by a piece of content are false, and linguistic barriers add further challenges. Basically, the sentiment of an online post can be assessed, but not its veracity. This is where human intervention is required alongside AI.
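As a rough illustration of the “word patterns and readability” cues described above, here is a toy stylometric check in Python. Real detectors use far richer models; the features and threshold here are invented purely for illustration:

```python
import re

def stylometric_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        # average sentence length, a crude readability proxy
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # type-token ratio: vocabulary diversity of the text
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

feats = stylometric_features(
    "The market fell today. The market fell today. The market fell today."
)
print(feats)  # highly repetitive text has a very low type-token ratio

# A single hand-set threshold like this is exactly the kind of brittle
# rule that increasingly human-like generated text will defeat.
if feats["type_token_ratio"] < 0.4:
    print("flag for human review: repetitive, possibly generated")
```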
Root tracing: the next-level cop
Fake news has often been found to share the same root – the place of origin before the spread of the news. The Fandango project, for example, takes stories that have been flagged as fake by human fact-checkers and then searches for social media posts or online pages with similar words or claims. This allows journalists and experts to trace fake stories to their roots and weed out potential threats before they spread out of control.
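A minimal sketch of that kind of root tracing, using simple word-overlap similarity (Fandango itself presumably uses far richer semantic matching; the stories, posts and threshold below are hypothetical):

```python
def word_set(text: str) -> set:
    return {w.strip(".,!?").lower() for w in text.split()}

def jaccard(a: set, b: set) -> float:
    """Overlap between two word sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / max(len(a | b), 1)

# Stories human fact-checkers have already flagged as fake.
flagged_fakes = ["miracle cure suppressed by doctors worldwide"]

posts = [
    "BREAKING miracle cure suppressed by doctors, share before deleted",
    "local team wins championship in extra innings",
]

for post in posts:
    for fake in flagged_fakes:
        sim = jaccard(word_set(post), word_set(fake))
        if sim > 0.3:  # wording close enough to trace to the same root
            print(f"possible copy of flagged story ({sim:.2f}): {post}")
```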
Services such as Politifact, Snopes and FactCheck employ human editors who can perform the primary research required for verifying the authenticity of a report or an image. Once a fake is found, AI algorithms help to crawl through the web and counter similar pieces of content that can foment social discord. If the content is found to be genuine, a reputation score can be assigned to the website article. The Trust Project uses parameters such as sources, references, ethical standards and corrections to assess the credibility of news outlets.
With the mushrooming volume of fake news and hate speech and the rapid spread of such content on social networks, relying on human fact-checkers alone is not sufficient. The process also carries bias, since what counts as offensive and socially divisive depends on the subjective opinion of the human fact-checker. For example, an allegedly acrimonious news article may describe a genuine incident in emotionally charged language, which may suit the views of one human checker but not another. Such a filtering method can therefore help establish the veracity of content, but not its sentiment.
Spread analysis to arrest propagation
There is a marked difference between the way fake news and genuine news travel over social networks. Researchers from MIT suggest that fake news travels six times faster than genuine news to reach 1,500 people on Twitter. Moreover, the chain length of genuine news (the number of people who have propagated a social media post) was never above 10 but rose to 19 for fake news. This is partly because of swarms of bots deployed by malicious elements to make fake stories go viral.
Humans are equally responsible, as people often share fake news without much critical thinking or sense of judgment. GoodNews uses an AI engine to identify fake news from engagement metrics, as fake news draws more shares than likes, while the reverse holds for genuine news. Such techniques for capturing suspicious content based on how it spreads can help prevent radicalization.
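Here is a toy version of those two spread signals, the GoodNews-style share/like ratio and the MIT chain-length finding. All the numbers below are illustrative placeholders, not real platform data:

```python
def suspicion_signals(shares: int, likes: int, chain_length: int) -> list:
    signals = []
    # GoodNews heuristic: fake items tend to draw more shares than likes.
    if shares > likes:
        signals.append("shares exceed likes")
    # MIT finding: genuine stories rarely propagate through chains of
    # more than ~10 resharers, while fake stories reached ~19.
    if chain_length > 10:
        signals.append("unusually long reshare chain")
    return signals

print(suspicion_signals(shares=40, likes=200, chain_length=6))    # []
print(suspicion_signals(shares=900, likes=150, chain_length=19))
# ['shares exceed likes', 'unusually long reshare chain']
```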
Humans at the core
Technology use is a reactive step, when the world needs a proactive approach to combat disinformation. AI alone won’t be successful unless we educate the masses – especially the youth – to be vigilant about disinformation. Some schools in India are teaching critical thinking methods and inculcating fact-checking habits in secondary school students. Fake news is not a matter of mere algorithms, but of the philosophy behind how we deal with knowledge – good or bad. Communities of informed users can contribute to ethical monitoring activities, while the crowdsourcing of collaborative knowledge among professional organizations is crucial to verifying raw information.
Humanizing the approach to combating disinformation must be the highest priority in order to build a well-informed society of critical thinkers. 🙂 🙂 🙂
A lack of proactive measures involving all stakeholders can lead to the rapid erosion of trust in media and institutions, which is a precursor to anarchy. Until humans learn to objectively evaluate online content, AI-based technologies have to be our ally in combatting disinformation online.
https://www.weforum.org/agenda/2022/07/disinformation-ai-technology
I was at the grocery store the other day and, to my great surprise, there were some ten people waiting in line to lather their hands with the sanitizer, and more joining the line as I passed the entrance gate. The great majority were not masked up. This brought home to me the number of people who are on their way to becoming robots. The WEF is winning.
I can’t believe that not one parent of these two 12-year-olds is questioning what happened to their son, nor any of the parents on the teams. These parents will have to live with their doubts and the guilt feelings which I’m certain they have.
They may also be in denial mode, pushing away any feelings of guilt.
Eeyore: ‘Propaganda’ and ‘Conspiracy’ are also words frequently used.