yungterra:

gary_buSAYITWATCHIT.png(s)

malformalady:

A Chinese businessman hid two illegally built extra storeys on his penthouse suite with trees and plants. The penthouse already had a roof garden, but it was increased in size to deceive neighbours and officials.

Marvelous.

Yo, Grammar: What’s up with “past” and “passed”?

theyuniversity:

Nothing beats getting Sriracha sauce after passing a final exam. Right, Freddie?

(Freddie Mercury GIF source: Vita In Pillole)

TL;DR - Biometric surveillance is being used to not only track individuals, but to profile them as well. Legal loopholes (e.g. there is no law stopping the government from accessing your webcam footage) and gray areas (e.g. driver’s license photos being analyzed in criminal cases) make accountability unrealistic. Federal law enforcement agencies are working with commercial enterprises to create massive databases that can absorb tens of thousands of new profiles per day (and several state agencies have already handed over what biometric data they’ve collected so far). We’re heading into a surveillance state in which all of us are passively monitored at all times, with that information being stored away in some database for later recall. [An interesting note from this article is that Janice Kephart, founder of the biometrics lobbying group SIBA and the most interesting character in this article, previously served as counsel to the 9/11 Commission. Take that as you will.]

Biometric Surveillance Means Someone Is Always Watching
By Kyle Chayka / April 17, 2014 6:06 AM EDT
Incrimination by selfie can happen.  
From 2008 to 2010, as Edward Snowden has revealed, the National Security Agency (NSA) collaborated with the British Government Communications Headquarters to intercept the webcam footage of over 1.8 million Yahoo users.
The agencies were analyzing images they downloaded from webcams and scanning them for known terrorists who might be using the service to communicate, matching faces from the footage to suspects with the help of a new technology called face recognition.
The outcome was pure Kafka, with innocent people being caught in the surveillance dragnet. In fact, in attempting to find faces, the Pentagon’s Optic Nerve program recorded webcam sex by its unknowing targets—up to 11 percent of the material the program collected was “undesirable nudity” that employees were warned not to access, according to documents. And that’s just the beginning of what face recognition technology might mean for us in the digital era.
Over the past decade, face recognition has become a fast-growing commercial industry, moving from its governmental origins—programs like Optic Nerve—into everyday life. The technology is being pitched as an effective tool for securely confirming identities, with the financial backing of a new Washington lobbying firm, the Secure Identity & Biometrics Association (SIBA).
To some, face recognition sounds benign, even convenient. Walk up to the international checkpoint in a German airport, gaze up at a camera, and walk into the country without ever needing to pull out a passport—your image is on file, the camera knows who you are. Wander into a retail store and be greeted with personalized product suggestions—the store’s network has a record of what you bought last time. Facebook already uses face recognition to recommend which friends to tag in your photos.
But the technology has a dark side. The U.S. government is in the process of building the world’s largest cache of face recognition data, with the goal of identifying every person in the country. The creation of such a database would mean that anyone could be tracked wherever his or her face appears, whether it’s on a city street or in a mall. Today’s laws don’t protect Americans from having their webcams scanned for facial data.

Security CCTV. Peter Marlow/Magnum
Not That Perfect
Face recognition systems have two components: an algorithm and a database. The algorithm is a computer program that takes an image of a face and deconstructs it into a series of landmarks and proportional patterns—the distance between eye centers, for example. This process of turning unique biological characteristics into quantifiable data is known as biometrics.
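To make the landmark-and-ratio idea concrete, here is a minimal sketch in Python. It assumes a landmark detector has already produced (x, y) coordinates; the landmark names and the handful of ratios below are illustrative only, not the feature set any commercial system actually uses.

    import math

    def distance(a, b):
        """Euclidean distance between two (x, y) landmark points."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def face_print(landmarks):
        """Reduce raw landmark coordinates to a small feature vector.
        Dividing by the eye span makes the ratios scale-invariant, so the
        print does not change with image resolution or camera distance."""
        eye_span = distance(landmarks["left_eye"], landmarks["right_eye"])
        return [
            distance(landmarks["left_eye"], landmarks["nose_tip"]) / eye_span,
            distance(landmarks["right_eye"], landmarks["nose_tip"]) / eye_span,
            distance(landmarks["nose_tip"], landmarks["mouth_center"]) / eye_span,
            distance(landmarks["left_ear"], landmarks["right_ear"]) / eye_span,
        ]

    # Hypothetical landmark coordinates, in pixels.
    sample = {
        "left_eye": (120, 140), "right_eye": (180, 140),
        "nose_tip": (150, 180), "mouth_center": (150, 215),
        "left_ear": (95, 160), "right_ear": (205, 160),
    }
    print(face_print(sample))
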
Together, the facial data points create a “face-print” that, like a fingerprint, is unique to each individual. Some faces are described as open books; at a glance, a person can be “read.” Face recognition technology makes that metaphor literal. “We can extrapolate enough data from the eye and nose region, from ear to ear, to build a demographic profile,” including an individual’s age range, gender and ethnicity, says Kevin Haskins, a business development manager at the face recognition company Cognitec.
Face-prints are collected into databases, and a computer program compares a new image or piece of footage with the database for matches. Cognitec boasts a match accuracy rate of 98.75 percent, an increase of over 20 percent over the past decade. Facebook recently achieved 97.25 percent accuracy after acquiring biometrics company Face.com in 2012.
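The matching step can be sketched just as simply, assuming face-prints are fixed-length vectors like the one above. The cosine-similarity measure, the toy database, and the 0.98 threshold are placeholders for illustration, not how Cognitec or Facebook actually compute their accuracy figures.

    import math

    def similarity(a, b):
        """Cosine similarity between two face-print vectors (1.0 = identical)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def identify(probe, database, threshold=0.98):
        """Return the best-matching enrolled identity, or None when no
        stored print is similar enough to count as a match."""
        best_name, best_score = None, 0.0
        for name, stored_print in database.items():
            score = similarity(probe, stored_print)
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= threshold else None

    # Usage: compare a new print against a small enrolled database.
    enrolled = {"alice": [0.83, 0.85, 0.58, 1.83], "bob": [0.95, 0.91, 0.66, 1.70]}
    print(identify([0.84, 0.84, 0.59, 1.82], enrolled))  # -> alice
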
So far, the technology has its limits. “The naive layman thinks face recognition is out there and can catch you anytime, anywhere, and your identity is not anonymous anymore,” says Paul Schuepp, the co-founder of Animetrics, a decade-old face recognition company based in New Hampshire. “We’re not that perfect yet.”
The lighting and angle of face images must be strictly controlled to create a usable face-print. Enrollment is the slightly Orwellian industry term for making a print and entering an individual into a face recognition database. “Good enrollment means getting a really good photograph of the frontal face, looking straight on, seeing both eyes and both ears,” Schuepp explains.

How face recognition is already being used hints at just how pervasive it could become. It’s being used on military bases to control who has access to restricted areas. In Iraq and Afghanistan, it was used to check images of detainees in the field against Al-Qaeda wanted lists. The Seattle police department is already applying the technology to identify suspects on video footage.
The technology’s presence is subtle, and as it gets integrated into devices we already use, it will be easy to overlook. The most dystopian example might be NameTag, a startup that launched in February promising to embed face recognition in wearable computers like Google Glass. The software would allow you to look across a crowded bar and identify the anonymous cutie you are scoping out. The controversial company also brags that its product can identify sex offenders on sight.
As the scale of face recognition grows, there’s a chance it could take its place in the technological landscape as seamlessly as the iPhone. But to allow that to happen would mean ignoring the increasing danger that it will be misused.

Monitors show imagery from security cameras seen at the Lower Manhattan Security Initiative on April 23, 2013 in New York, NY. At the counter-terrorism center, police and private security personnel monitor more than 4,000 surveillance cameras and license plate readers mounted around the Financial District and surrounding parts of Lower Manhattan. Credit: John Moore/Getty
Inescapable Security Net
By licensing their technology to everyone from military defense contractors to Internet start-ups, companies like Cognitec and Animetrics are fueling a global biometrics industry that will grow to $20 billion by 2020, according to Janice Kephart, the founder of SIBA. With funding from a coalition of face recognition businesses, SIBA launched in February 2014 to “educate folks about the reality of biometrics, bridging the gap between Washington and the industry,” says Kephart, who previously worked as counsel to the 9/11 Commission. “The Department of Homeland Security hasn’t done anything on this for 16 years. America is falling way behind the rest of the world.”
Kephart believes biometric technology could have prevented the 9/11 attacks (which she says “caused a surge” in the biometrics industry) and Edward Snowden’s NSA leaks. She emphasizes the technology’s protective capabilities rather than its potential for surveillance. “Consumers will begin to see that biometrics delivers privacy and security at the same time,” she says.
It’s this pairing of seeming opposites that makes face recognition so difficult to grapple with. By identifying individuals, it can prevent people from being where they shouldn’t be. Yet the profusion of biometrics creates an inescapable security net with little privacy and the potential for serious mistakes with dire consequences. An error in the face recognition system could cause the ultimate in identity theft, with a Miley Cyrus look-alike dining on Miley’s dime or a hacker giving your digital passport (and citizenship) to a stranger.
Some in government express concern over the potential for abuse. U.S. Senator Al Franken, D-Minn., has become a leading figure in the debate, noting in 2013 that face recognition “has profound implications for privacy”—namely, that there won’t be any. In a February 2014 letter to NameTag, he urged the company to delay its product “until best practices for facial recognition technology are established.”
Franken’s suggestion points out the biggest problem with face recognition’s future. Legal boundaries for the technology have not been set; we know that public face recognition data is being collected, but we don’t know how it is being accessed or used.
Contrary to Kephart’s assertions, the federal government has been quite busy with biometrics. This summer, the FBI is focusing on face recognition with the fourth step of its Next Generation Identification (NGI) program, a $1.2 billion initiative launched in 2008 to build the world’s largest biometric database. By 2013, the database held 73 million fingerprints, 5.7 million palm prints, 8.1 million mug shots and 8,500 iris scans. Interfaces to access the system are being provided free of charge to local law enforcement authorities.
Jennifer Lynch, staff attorney for the privacy-focused Electronic Frontier Foundation (EFF), notes there were at least 14 million photographs in the NGI face recognition database as of 2012. What’s more, the NGI database makes no distinction between criminal biometrics and those collected for civil service jobs. “All of a sudden, your image that you uploaded for a civil purpose to get a job is searched every time there’s a criminal query,” Lynch says. “You could find yourself having to defend your innocence.”
Through a federal lawsuit, EFF obtained redacted NGI documents that it will soon publish. The documents show that by 2015, the FBI estimates that NGI will include 46 million criminal face images and 4.3 million civil face images. The vendor building the face recognition system, MorphoTrust, was asked to design it to receive up to 55,000 direct photo enrollments per day and 2,300 per hour, as well as process 34,000 photo retrievals per day and 1,400 per hour. The statistics hint at the sheer scale of the face recognition infrastructure under construction—in one year, over 20 million Americans could be put into the system.
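The "over 20 million in one year" figure follows directly from the enrollment capacity quoted above; a quick back-of-the-envelope check:

    # Rough check of the enrollment figures cited in the NGI documents.
    enrollments_per_day = 55_000   # design ceiling for direct photo enrollments
    days_per_year = 365

    print(enrollments_per_day * days_per_year)   # 20,075,000 -- "over 20 million"
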
Documents also show that Michigan, Florida, Kansas, South Carolina, South Dakota, Hawaii and Maryland likely have already incorporated their criminal mug-shot databases into the system and that 11 more states are in discussions to work with NGI, including New York.
"Americans cannot easily take precautions against the covert, remote and mass capture of their images," Lynch said in the EFF’s lawsuit statement. In a world where any camera could be used to grab a face-print, it’s impossible to know where your identity will end up. To assert control, we must determine if we have a right to our own faces.
Elevator sign with surveillance monitor. JPM/Corbis
Warrantless Collection
Some legal precedents suggest that we do have a modicum of control over personal biometric data. The 1969 Supreme Court case Davis v. Mississippi determined that fingerprints (a form of biometrics) obtained without a warrant or probable cause for arrest cannot be used in court. Likewise, “the warrantless collection and use of face-prints by law enforcement is unlikely to overcome the hurdle of the Fourth Amendment,” Kirill Levashov writes in The Columbia Science and Technology Law Review. (The collection of biometrics from individuals who have been legally arrested is protected under cases like 2013’s Maryland v. King.)
Yet despite cases like Davis v. Mississippi, noncriminal biometric information is already being included in criminal investigations, according to Lynch. “Law enforcement agencies are already using Department of Motor Vehicles databases,” she says. “We know that law enforcement agencies including the FBI are searching those databases for criminal purposes”—meaning that any time citizens have their photo taken in a governmental capacity, whether it’s a background check or a driver’s license, their faces are liable to be analyzed by NGI.
In the private sector, efforts are being made to ensure face recognition isn’t abused, but standards are similarly vague. A 2012 Federal Trade Commission report recommends that companies obtain “affirmative express consent before collecting or using biometric data from facial images.” Facebook collects face-prints by default, but users can opt out of having them collected.
Technology entrepreneurs argue that passing strict laws before face recognition technology matures will hamper its growth. “What I’m worried about is policies being made inappropriately before their time,” Animetrics’s Schuepp says. “I don’t think it’s face recognition we want to pick on.” He suggests that the technology itself is not the problem; rather, it’s how the biometrics data are controlled.
Yet precedents for biometric surveillance must be set early in order to control its application. “I would like to see regulation of this before it goes too far,” Lynch says. “There should be laws to prevent misuse of biometric data by the government and by private companies. We should decide whether we want to be able to track people through society or not.”
Impossible to Be Anonymous
What would a world look like with comprehensive biometric surveillance? “If cameras connected to databases can do face recognition, it will become impossible to be anonymous in society,” Lynch says. That means every person in the U.S. would be passively tracked at all times. In the future, the government could know when you use your computer, which buildings you enter on a daily basis, where you shop and where you drive. It’s the ultimate fulfillment of Big Brother paranoia.
But anonymity isn’t going quietly. Over the past several years, mass protests have disrupted governments in countries across the globe, including Egypt, Syria and Ukraine. “It’s important to go out in society and be anonymous,” Lynch says. But face recognition could make that impossible. A protester in a crowd could be identified and fired from a job the next day, never knowing why. A mistaken face-print algorithm could mark the wrong people as criminals and force them to escape the specter of their own image.
If biometric surveillance is allowed to proliferate unchecked, the only option left is to protect yourself from it. Artist Zach Blas has made a series of bulbous masks, aptly named the “Facial Weaponization Suite,” that prepare us for just such a world. The neon-colored masks both disguise the wearer and make the rest of us more aware of how our faces are being politicized.
"These technologies are being developed by police and the military to criminalize large chunks of the population," Blas says of biometrics. If cameras can tell a person’s identity, background and whereabouts, what’s to stop the algorithms from making the same mistakes as governmental authorities, giving racist or sexist biases a machine-driven excuse? "Visibility," he says, "is a kind of trap."

ascannerprogressive:

[46/365]
TL;DR - The CEO of European publisher Axel Springer foresees a Google that is all-powerful and unaccountable. I’m of two minds on this: basically, I believe the market should decide when Google’s gone too far, but I also believe that Google’s already gone too far and they show no signs of stopping. However, the company has a cozy relationship with the American government (for a start), so I don’t think that appealing to such a government is going to yield any effective reform.

Major media publisher admits it is “afraid of Google”
Worries the search giant is turning into a “superstate” immune from prosecution. 
By Olivia Solon, wired.co.uk / Apr 20, 2014, 4:00pm EST
The chief executive of Axel Springer, one of Europe’s largest media publishers, has said that his company is afraid of the power that Google has accumulated and worries that the search giant is becoming a “superstate,” immune from regulation.
Mathias Döpfner published an open letter to Google’s executive chairman, Eric Schmidt, in the German newspaper Frankfurter Allgemeine Zeitung in which he points out that Google is not only the largest search engine in the world, but the largest video platform, the largest browser, and the most used e-mail service and mobile operating system. The open letter was published as a response to a guest column written by Schmidt in the same newspaper.
Döpfner goes on to talk about the “schizophrenic” relationship between Axel Springer and Google. On one hand the publisher is part of a European antitrust lawsuit against the search giant, while it also relies on Google’s traffic and ad revenue. “We know of no alternative that even begins to offer similar technological requirements for automated advertising sales, and we cannot do without this source of income,” he says.
He refers to a case where a change to Google’s algorithm led to a drop in traffic to an Axel Springer subsidiary of 70 percent: “This is a real case. And that subsidiary is a competitor of Google… I am sure it is a coincidence.”
"We are afraid of Google," he added.
He went on to talk of Google’s monopoly, with its 90-percent market share in web searches (March 2014 figures). “The market belongs to only one,” he said.
He points out that Google lists its own products—from commerce to Google+ profiles—higher up than competitor results, even if the competitor website has more visitors. “This is called abuse of a dominant position,” he says. Despite this, the European Commission effectively sanctioned Google’s approach as long as Google offers a new advertising position at the start of the search list where the company discriminated against can pay to advertise.
"This is not a compromise," said Döpfner, "this is the EU officially sanctioning your business model, which is called ‘protection money’ in less honorable circles."
Döpfner also makes reference to the “if you have nothing to hide, you have nothing to fear” argument espoused at different times by Schmidt and Facebook’s Mark Zuckerberg, pointing out that such words could also come from the “head of the Stasi” or another dictator’s intelligence agency.
"Google knows more about every digital citizen than George Orwell dared to imagine in his wildest visions of 1984," he says. Döpfner is particularly concerned about comments made by founder Larry Page, who said that there are lots of things the company would like to do but can’t do because they are illegal—pesky antitrust and privacy laws get in the way. Google has also expressed an interest in building floating working environments—for “seasteading.”
"Does this mean that Google is planning on operating in a legal vacuum? A kind of super-state that can navigate its floating kingdom past all nation states?" he asks.
There is hope, however, argues Döpfner: Google could lead by example and create transparency by producing search results based on “clear quantitative criteria” and dealing with algorithm changes openly.
Read the damning assessment of Google in full.
This story originally appeared in Wired UK.

vgjunk:

Paranoia, PC Engine.

TL;DR - On paper, America may be a democratic republic, but in practice it is an oligarchy of special interests. One of the most effective lobbyists in this corptocratic system, at home and abroad, is Google (and Facebook isn’t far behind). With Google’s ubiquitous data collection (Gmail, G+, Google Glass, Google Wallet) and its friendly relations with the world’s governments, it isn’t difficult to imagine the danger such an organization poses.

Google: the unelected superpower

Google has cosied up to governments around the world so effectively that its chairman is a White House advisor

By Katherine Rushton / 6:38PM BST 17 Apr 2014

Researchers at Princeton and Northwestern universities have pored over 1,800 US policies and concluded that America is an oligarchy. Instead of looking out for the majority of the country’s citizens, the US government is ruled by the interests of the rich and the powerful, they found. No great surprises there, then.

But the government is not the only American power whose motivations need to be rigorously examined. Some 2,400 miles away from Washington, in Silicon Valley, Google is aggressively gaining power with little to keep it in check.

It has cosied up to governments around the world so effectively that its chairman, Eric Schmidt, is a White House advisor. In Britain, its executives meet with ministers more than almost any other corporation.

Google can’t be blamed for this: one of its jobs is to lobby for laws that benefit its shareholders, but it is up to governments to push back. As things stand, Google – and to a lesser extent, Facebook – are in danger of becoming the architects of the law.

Meanwhile, these companies are becoming ever more sophisticated about the amount of information they access about users. Google scans our emails. It knows where we are. It anticipates what we want before we even know it. Sure there are privacy settings and all that, but surrendering to Google also feels nigh on impossible to avoid if you want to live in the 21st century. It doesn’t stop there either. If Google Glass is widely adopted, it will be able to clock everything we see, while the advance of Google Wallet could position the company at the heart of much of the world’s spending.

One source at the technology giant put it well when she referred to the company as an “unelected superpower”. I think this is a fair summary. So far, we are fortunate that that dictatorship is a relatively benign one. The company’s mantra is “don’t be evil”, and while people may disagree on what evil means, broadly speaking, its founders are pretty good guys. But Larry Page and Sergey Brin will not be around forever. Nor should we rely on any entity that powerful to regulate its own behaviour.

The government in America, and its counterparts around the world, should stop kowtowing to Google and instead work in concert to keep this and any other emerging corporate superpowers firmly in check.

The company’s name, “Mazda,” derives from Ahura Mazda, a god of the earliest civilizations in West Asia. We have interpreted Ahura Mazda, the god of wisdom, intelligence and harmony, as the symbol of the origin of both Eastern and Western civilizations, and also as a symbol of automobile culture. It incorporates a desire to achieve world peace and the development of the automobile manufacturing industry. It also derives from the name of our founder, Jujiro Matsuda.
Mazda Motor Corporation’s official site (see the side bar on the right). Additional information can be found here.

TL;DR - Google has purchased solar-powered drone manufacturer Titan Aerospace, outbidding Facebook in their effort to take internet access global.

Google Buys Drone Company Titan Aerospace

Apr. 14, 2014, 2:03 PM

Google has acquired drone maker Titan Aerospace, the Wall Street Journal reports.

Titan is a New Mexico-based company that makes high-flying solar powered drones.

There’s no word on the price Google paid, but Facebook had been in talks to acquire the company earlier this year for a reported $60 million. Presumably, Google paid more than that to keep it away from Facebook. 

It sounds like Titan will work on a variety of projects for Google. 

  • Titan will be able to collect photos from around the planet from high up, which could help with Google Earth and Google Maps. 
  • It will also contribute to Google’s Project Loon, which sends balloons into the atmosphere that then beam Internet to parts of the world that are not yet connected. 
  • It’s also likely to work with Makani, another company Google bought, which harvests wind power high in the sky and delivers the energy back to Earth through a long cable.

Google confirmed the acquisition to the Journal, and a spokesperson said, "It’s still early days, but atmospheric satellites could help bring internet access to millions of people, and help solve other problems, including disaster relief and environmental damage like deforestation."

Titan’s drones could potentially be in the air for five years at a time, relying on solar power to stay aloft, according to a report from last year.

Here’s what Titan’s drones look like:

Titan Aerospace drones. Credit: Titan Aerospace

arxsec:

Attackers exploit Heartbleed vulnerability to bypass multifactor authentication

Security company Mandiant has reportedly said that a walled-off virtual private network of a client was breached by attackers using the Heartbleed …
