Reality Check: Is Your Personal Data Safe Online?

The Facebook scandal involving personal data mishandled by Cambridge Analytica has raised concerns over the privacy of the information we share on our social media accounts.

Some countries have gone as far as to legislate Internet data privacy with laws granting the “right to be forgotten.”

Yet Facebook CEO Mark Zuckerberg says we don’t need such regulations here in the States. Is he right?

This is a Reality Check you won’t get anywhere else.

It’s an unsettling thought: your personal data, being manipulated on a global scale. Where you live, what kind of car you drive, how many children you have, what food you eat, how much money you earn, what clothes you wear, how you exercise—the list goes on and on.

While other countries are tightening laws on Internet privacy and how corporations can use your data, such as the UK’s data protection law with its “right to be forgotten,” the United States seems to be stuck in the 1980s on the issue.

In California, privacy is a right in the state constitution. “Privacy” was added to the state’s “inalienable rights” by the legislature in 1972.

And though California has been a leader in privacy, the last meaningful update to the state’s privacy laws was in the 1980s, long before today’s technology.

For context, Census data shows that in 1989, 15 percent of American households owned a computer.

Today, according to Pew Research, 77 percent of Americans have a smartphone—a computer in their pocket or purse.

And in 2015, those smartphone owners used about 27 smartphone apps per month, according to Statista.

Just think about all of the information you give to the apps on your smartphone. Do you read their terms of use?

You know you don’t. And yet, a California-based group called Californians for Consumer Privacy has raised concern about how our information is collected and sold.

From that group came the California Consumer Privacy Act. The act is intended not only to hold major corporations making $50 million per year or more responsible for their consumers’ data, but also to give Californians the right to know where and to whom their data is being disclosed or sold, and whether their data is being properly protected.

There’s nothing in California today that allows users to see what data has been collected on them. And data is being collected everywhere you go.

From the checkout at Target, to your Facebook account, browsing the Internet or even just walking on a city street—credit cards are being swiped, messages are being shared, and cameras are recording.

So are the rules of how businesses use your data fair and respectful of your privacy?

One of the key aspects of the California Consumer Privacy Act is a right of action against companies that store data but have not taken reasonable steps to secure that data. That means consumers can sue companies that didn’t protect their data.

What exactly “reasonable steps” means needs to be fleshed out in the courts, but there are plenty of examples of companies that didn’t take “reasonable steps” until after data was compromised.

From December 19, 2013, “Target says hackers breached its system and stole 40 million credit card numbers.”

From September 18, 2014, “Almost immediately after word broke that Home Depot had been hacked, security experts were noting that the breach was likely even worse than the massive Target breach that had preceded it.”

From October 2, 2014, “JP Morgan just revealing that an August data breach could affect 76 million households.”

From February 5, 2015, “One of America’s largest health insurers, Anthem, this morning confirmed a massive data breach. Reports say hackers may have stolen up to 80 million records. No credit card or medical information is in danger, but Social Security numbers, birthdays and addresses may have been compromised.”

What you need to know is that when we provide information to a corporation, we establish a relationship.

We believe the corporation will use our information for the purpose of their service. Once your information is outside of the intended use, it’s nearly impossible to control it.

And third-party sharing of your data allows it to be used, shared and disseminated without any control on your part. Big data is a powerful force in the United States. But should big data be allowed to do whatever it wants with your information? If not, how do we, as the public, get some control back?

Let’s talk about that, right now, on social media, while someone collects our data.

Trust Lost: How Social Media Users’ Data Should Be Protected

Over the past few days, there has been public outrage over the way Facebook is handling personal data. This was brought to light by the recent scandal with Cambridge Analytica, but really should come as no surprise as Facebook has been treating its users this way since it launched back in 2004.

What Happened with Cambridge Analytica

The story that broke over the past few days is really just a piece of a much larger issue with Facebook and how it handles personal data. To summarize, a developer named Aleksandr Kogan built an application in 2014 offering a personality quiz to Facebook users. About 270,000 users took the quiz, but in doing so they granted Kogan’s app access not only to their own Facebook data, but to the data of ALL of their Facebook friends as well—meaning the app now had data on 50 million users. Kogan then provided this data to Cambridge Analytica, which used it to create over 30 million psychographic profiles of potential voters.

Facebook is at fault for a major data breach because it failed to protect the personally identifiable information of its users.

1) Data Policy

Up until 2014, Facebook’s policy allowed an application developer to ask permission from Facebook users to access their data. However, it also allowed the apps to collect that same data about ALL of those users’ friends on Facebook, without consent. Facebook changed this policy in 2014 to ensure that apps could not collect data on users’ friends, but by that point, the damage had been done.

An ex-Facebook employee on the privacy team stated, “At a company that was deeply concerned about protecting its users, this situation would have been met with a robust effort to cut off developers who were making questionable use of data. But when I was at Facebook, the typical reaction I recall looked like this: try to put any negative press coverage to bed as quickly as possible, with no sincere efforts to put safeguards in place or to identify and stop abusive developers. When I proposed a deeper audit of developers’ use of Facebook’s data, one executive asked me, ‘Do you really want to see what you’ll find?’”

2) Data Collection

The crux of the issue lies in the data that Facebook requires from new users in the first place. You are forced to provide your first name, last name, email or phone number, birthday and gender. The reason is simple – this information is a marketer’s (or a politician’s) targeting dream, and also the key to Facebook’s revenue and entire business model.

Facebook, however, does share blame with its users, because when you sign up, you are agreeing to hand over rights regarding your personal information, so there is a level of consent. However, Facebook further deceives and confuses its users into thinking their information will not be disclosed by providing layers of permissions and privacy settings. If users were more aware of how much of their personal data was actually public (or being shared with government), they would be much more reserved in giving it away. Your personal data on Facebook is not private, and Zuckerberg has known this since the beginning. Obviously he has attempted to apologize, but he has demonstrated no tangible actions to actually address the public concern.

How can you protect YOUR data?

1) Allow users to be anonymous if they want, untracked and free from surveillance and spying.

Anonymity means that site data is de-identified and not traceable to a person. By contrast, on today’s major networks, nearly all user data is tied to a real identity by default.
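The de-identification idea can be sketched in a few lines. This is a hypothetical illustration, not any platform’s actual implementation: a direct identifier (here, an email address) is replaced with a salted one-way pseudonym before activity is logged, so the stored record is not traceable back to the person without the salt.

```python
import hashlib
import secrets

# Hypothetical sketch of de-identification: activity is stored under a
# salted, one-way pseudonym rather than the user's real identifier.
# The per-deployment salt below is illustrative; a real system would
# manage (or deliberately discard) it under a documented policy.
SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Return a one-way pseudonym for user_id (64 hex characters)."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

# The site logs the event under the pseudonym, not the identity.
event = {"user": pseudonymize("alice@example.com"), "action": "viewed_page"}
```

Because the hash is one-way, the log alone cannot be reversed to recover the email address, though anyone holding the salt could still re-link records—which is why how the salt is handled determines whether data is truly anonymous or merely pseudonymous.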

2) Maintain zero-knowledge on sensitive data.

This is essential in ensuring that users can chat freely with each other without the concern that the conversation is being monitored by anyone, including Minds. All sensitive data on Minds is encrypted end-to-end, whether in motion or at rest, and original content is the property of the user.
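The zero-knowledge idea is that the client encrypts before upload, so the server stores only ciphertext it cannot read. A minimal, dependency-free sketch of that property (using a one-time pad purely for illustration—real messengers use vetted protocols such as the Signal protocol, not this):

```python
import secrets

# Illustrative zero-knowledge sketch: the client encrypts locally, the
# server only ever sees ciphertext. XOR with a truly random, single-use
# key of equal length is a one-time pad; shown here only because it
# needs no external libraries. Do NOT use this for real security.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)  # pad must match message length
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared out-of-band by the clients

ciphertext = encrypt(message, key)  # this is all the server stores
restored = decrypt(ciphertext, key)  # only key holders can recover it
```

The point of the sketch is the architecture, not the cipher: because the key never reaches the server, the server holds “zero knowledge” of the conversation even while storing it.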

3) 100% free and open source for public accountability and inspection.

Unlike the top proprietary social networks, social media platforms should be open source. You must be able to inspect the code and even help contribute to and build the network. This provides much-needed community ownership and transparency into what the platform is actually doing, as opposed to simply taking its word for it.

This debate exposes the paradox between transparency and privacy, both of which are core principles of Internet freedom pioneers. Facebook has gotten itself into a deadly trap by pretending it is giving people privacy with layered permission levels and supposed “privacy” settings while also exposing massive amounts of data without consent. It is handing over data to the highest bidders, and the user has lost all control.

Social networks must give that control back to the people. This is only the beginning of the privacy movement, and we all must join together for a better future for everyone.

About the author:
Bill Ottman is an American internet entrepreneur, freedom of information activist and hacker based in New York City, best known as the CEO and co-founder of Minds, an open-source social networking service. He is a graduate of the University of Vermont, and co-founded Minds with John Ottman and Mark Harding in 2011.