The Net Neutrality Act was signed February 26th, 2015.... and this announcement comes out two days later. Coincidence? Yea.... I think not!!
Google knew they had no way of pushing this through without the Net Neutrality Act being passed by the FCC. They were just waiting, I guarantee it.
Time to get off our asses again and let Google know that we will NOT allow this to happen!!
And you know, it isn't just the "conspiracy theorists" who have an interest in taking a stand. What about the various Christian groups who staunchly believe that the world is less than 6,000 years old? Will Google throw their articles to the bottom of the rankings because the Google Guru Guys believe in Evolution and the "Big Bang" theory? What about the political ramifications of a Republican Google Guru Guy tossing the rankings of a hard-hitting Democrat website to the bottom of the pile? Or how about corporate competition? What happens if Yahoo.com writes up a scathing article about Google.com? To the bottom they go!!!
What constitutes "The Truth"? WHOSE TRUTH? And will they have a panel in place to deal with arguments and appeals?
..... So if I post an article saying "Vaccines are Toxic and Cause the Spread of the Diseases Themselves" and Google drops it to page 120,479 of the search results, do I get to appeal their decision? And when I dump about 45 reading hours' worth of studies, published papers, and signed affidavits by medical professionals supporting my statements in the article.... will the Google Guru Guys actually admit that they were wrong? And what compensation will they offer me for the libelous action of calling my article "untruthful" when in fact I am publishing the truth? Can I then sue them for defamation of character? Libel? Slander? Emotional distress due to the trauma they inflicted on me by calling me a liar? Loss of income due to fewer people clicking on my article because their hired Google Guru Guys deliberately buried it on page 120,479 of their Google Gestapo search listings?
I tell ya what, Google. How about we make a deal? I'll publish my articles, and if you classify them as "untruthful" and take action against my current rankings due to the biased opinions of your Google Guru Guys, and I PROVE THAT YOU ARE WRONG AND I AM RIGHT..... you owe me a million bucks.
....'cause that's where this is all going to go, very quickly.
d
ps: as usual, my comments and highlights in blue
Beyond the articles below, here are a few more articles on the topic:
Search engine giant Google, the major driver of traffic to the majority of media portals, is moving to change the way it ranks websites, declaring that it intends to use known partisan debunking outlets to determine the "truthfulness" of content.
Currently, Google rankings are determined by the number of incoming links to a web page, meaning that if a story becomes popular it can be driven to the top of search results, and be viewed by millions of people.
However, this is a little too democratic for the liking of some, who only like to get their “facts” from pre-approved sources.
The proposed solution, according to a Google-funded research team, is to compute a "Knowledge-Based Trust score" for every web page, based on Google's own "Knowledge Vault", an automated database that determines "facts the web unanimously agrees on," according to the New Scientist.
“A source that has few false facts is considered to be trustworthy,” says the research team.
In short, any web pages that provide information contradicting or questioning Google's own established "truth" will be bumped down the rankings.
The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts.
The truth, according to Google
In charge of truth? Google considers ranking sites on facts, not popularity
Google Moving to Shut Down Alternative Media by Ranking Sites on “Facts” Rather than Popularity
Wants To Cross Check Websites Against Debunking Sites Such As Snopes.com
In addition, some of those working on "truthfulness" ranking technology have expressed a desire to verify or rebut web pages by cross-referencing them to other sources, such as Snopes, PolitiFact and FactCheck.org. (Are you Freakin' Kidding?!) These websites exist and profit directly from debunking anything and everything. What's more, they have been previously exposed as highly partisan.
Continue Reading HERE
Google wants to rank websites based on facts not links
- 28 February 2015 by Hal Hodson
THE internet is stuffed with garbage. Anti-vaccination websites make the front page of Google, and fact-free "news" stories spread like wildfire. Google has devised a fix – rank websites according to their truthfulness.
Google's search engine currently uses the number of incoming links to a web page as a proxy for quality, determining where it appears in search results. So pages that many other sites link to are ranked higher. This system has brought us the search engine as we know it today, but the downside is that websites full of misinformation can rise up the rankings, if enough people link to them.
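That link-counting scheme is simple enough to sketch. Here is a toy Python illustration, with invented page names; real link-based ranking also weights links rather than just counting them:

```python
# Toy link-based ranking: pages with more incoming links rank higher,
# regardless of whether their content is accurate.
from collections import Counter

def rank_by_incoming_links(links):
    """links: list of (source_page, target_page) hyperlink pairs."""
    counts = Counter(target for _, target in links)
    return sorted(counts, key=counts.get, reverse=True)

links = [
    ("blogA", "viral-misinfo.example"),
    ("blogB", "viral-misinfo.example"),
    ("blogC", "viral-misinfo.example"),
    ("journal", "accurate-site.example"),
]
print(rank_by_incoming_links(links))
# the heavily linked page comes first, accurate or not
```

The popular page wins purely on link volume, which is exactly the "downside" the article describes.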
A Google research team is adapting that model to measure the trustworthiness of a page, rather than its reputation across the web. Instead of counting incoming links, the system – which is not yet live – counts the number of incorrect facts within a page. "A source that has few false facts is considered to be trustworthy," says the team (arxiv.org/abs/1502.03519v1). The score they compute for each page is its Knowledge-Based Trust score.
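The fact-counting idea can be caricatured in a few lines. This is only a hedged sketch with invented facts: the actual paper uses joint probabilistic inference over extracted triples, while this toy simply takes the fraction of a page's facts that agree with a reference knowledge base:

```python
# Drastically simplified Knowledge-Based Trust style score:
# fraction of a page's (subject, attribute) -> value facts that
# match a reference knowledge base. Illustrative only.

def kbt_score(page_facts, knowledge_base):
    """page_facts: {(subject, attribute): value} extracted from one page."""
    if not page_facts:
        return 0.0
    correct = sum(
        1 for triple, value in page_facts.items()
        if knowledge_base.get(triple) == value
    )
    return correct / len(page_facts)

kb = {("Paris", "capital_of"): "France",
      ("Earth", "age"): "4.5 billion years"}

page = {("Paris", "capital_of"): "France",
        ("Earth", "age"): "6000 years"}

print(kbt_score(page, kb))  # 0.5: half the page's facts check out
```

Note the obvious issue the blog raises: the score is only as good as the reference base it is checked against.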
The software works by tapping into the Knowledge Vault, the vast store of facts that Google has pulled off the internet. Facts the web unanimously agrees on are considered a reasonable proxy for truth. Web pages that contain contradictory information are bumped down the rankings.
There are already lots of apps that try to help internet users unearth the truth. LazyTruth is a browser extension that skims inboxes to weed out the fake or hoax emails that do the rounds. Emergent, a project from the Tow Center for Digital Journalism at Columbia University, New York, pulls in rumours from trashy sites, then verifies or rebuts them by cross-referencing to other sources.
LazyTruth developer Matt Stempeck, now the director of civic media at Microsoft New York, wants to develop software that exports the knowledge found in fact-checking services such as Snopes, PolitiFact and FactCheck.org so that everyone has easy access to them. He says tools like LazyTruth are useful online, but challenging the erroneous beliefs underpinning that information is harder. "How do you correct people's misconceptions? People get very defensive," Stempeck says. "If they're searching for the answer on Google they might be in a much more receptive state."
Original Article HERE
Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources
(Submitted on 12 Feb 2015)
The quality of web sources has been traditionally evaluated using exogenous signals such as the hyperlink structure of the graph. We propose a new approach that relies on endogenous signals, namely, the correctness of factual information provided by the source. A source that has few false facts is considered to be trustworthy. The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases. We propose a way to distinguish errors made in the extraction process from factual errors in the web source per se, by using joint inference in a novel multi-layer probabilistic model. We call the trustworthiness score we computed Knowledge-Based Trust (KBT). On synthetic data, we show that our method can reliably compute the true trustworthiness levels of the sources. We then apply it to a database of 2.8B facts extracted from the web, and thereby estimate the trustworthiness of 119M webpages. Manual evaluation of a subset of the results confirms the effectiveness of the method.
Continue reading, and/or download original PDF HERE
Google's fact-checking bots build vast knowledge bank
- 20 August 2014 by Hal Hodson
- Magazine issue 2983
GOOGLE is building the largest store of knowledge in human history – and it's doing so without any human help.
Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.
Knowledge Vault is a type of "knowledge base" – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type "Where was Madonna born" into Google, for example, the place given is pulled from Google's existing knowledge base.
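That "knowledge base" idea, facts rather than raw numbers, is easy to picture as subject-attribute-value triples. A toy sketch, with a hand-filled store rather than anything pulled from Google:

```python
# A knowledge base at its simplest: (subject, attribute) -> value
# triples that a machine can query directly, unlike free text.
knowledge_base = {
    ("Madonna", "born_in"): "Bay City, Michigan",
    ("Madonna", "occupation"): "singer",
    ("France", "capital"): "Paris",
}

def answer(subject, attribute):
    """Look up a fact; return 'unknown' when the base has no entry."""
    return knowledge_base.get((subject, attribute), "unknown")

print(answer("Madonna", "born_in"))   # Bay City, Michigan
print(answer("Madonna", "died_in"))   # unknown
```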
This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far.
So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.
Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as "confident facts", to which Google's model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
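That "confident facts" cut is, at heart, a probability threshold. A minimal sketch, where the facts and probabilities are made up for illustration; Google's model derives its probabilities by cross-referencing, as the article says:

```python
# Keep only extracted facts whose estimated probability of being
# true exceeds 0.9 -- the "confident facts" cut described above.
# All probabilities here are invented.
facts = [
    ("Paris", "capital_of", "France", 0.99),
    ("Elvis", "alive_in", "2015", 0.02),
    ("Madonna", "born_in", "Bay City", 0.95),
    ("Moon", "made_of", "cheese", 0.01),
]

confident = [f[:3] for f in facts if f[3] > 0.9]
print(len(confident), "of", len(facts), "facts pass the 90% threshold")
```

Applied to the article's numbers, the same filter is what leaves 271 million "confident facts" out of 1.6 billion extracted ones.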
"It's a hugely impressive thing that they are pulling off," says Fabian Suchanek, a data scientist at Télécom ParisTech in France.
Google's Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.
Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it's only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages, for example.
Tom Austin, a technology analyst at Gartner in Boston, says that the world's biggest technology companies are racing to build similar vaults. "Google, Microsoft, Facebook, Amazon and IBM are all building them, and they're tackling these enormous problems that we would never even have thought of trying 10 years ago," he says.
The potential of a machine system that has the whole of human knowledge at its fingertips is huge. One of the first applications will be virtual personal assistants that go way beyond what Siri and Google Now are capable of, says Austin.
"Before this decade is out, we will have a smart priority inbox that will find for us the 10 most important emails we've received and handle the rest without us having to touch them," Austin says. Our virtual assistant will be able to decide what matters and what doesn't.
Other agents will carry out the same process to watch over and guide our health, sorting through a knowledge base of medical symptoms to find correlations with data in each person's health records. IBM's Watson is already doing this for cancer at Memorial Sloan Kettering Hospital in New York.
Knowledge Vault promises to supercharge our interactions with machines, but it also comes with an increased privacy risk. The Vault doesn't care if you are a person or a mountain – it is voraciously gathering every piece of information it can find.
"Behind the scenes, Google doesn't only have public data," says Suchanek. It can also pull in information from Gmail, Google+ and YouTube. (Wanna bet that you can add Facebook, Twitter, and LinkedIn, et al. to that list?) "You and I are stored in the Knowledge Vault in the same way as Elvis Presley," Suchanek says. Google disputes this, however. In an email to New Scientist, a company spokesperson said, "The Knowledge Vault does not deal with personal information." (Yea, Right!)
Google researcher Kevin Murphy and his colleagues will present a paper on Knowledge Vault at the Conference on Knowledge Discovery and Data Mining in New York on 25 August.
As well as improving our interactions with computers, large stores of knowledge will be the fuel for augmented reality, too. Once machines get the ability to recognise objects, Knowledge Vault could be the foundation of a system that can provide anyone wearing a heads-up display with information about the landmarks, buildings and businesses they are looking at in the real world. "Knowledge Vault adds local entities – politicians, businesses. This is just the tip of the iceberg," Suchanek says.