
Seized cache of Facebook docs raises competition and consent questions

Posted by | Android, api, competition, Damian Collins, data protection law, DCMS committee, Developer, Europe, european union, Facebook, Mark Zuckerberg, Onavo, Policy, privacy, Six4Three, Social, social network, terms of service, United Kingdom, vpn | No Comments

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  • White Lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

Facebook responded

  • Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.

In their response Facebook contends that this was essentially another “cherrypicked” topic and that the company “ultimately settled on a model where developers did not need to purchase advertising to access APIs and we continued to provide the developer platform for free.”

  • Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  • Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  • Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  • Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against rival apps, denying them access to data and thereby causing those businesses to fail.

Update: 11:40am

Facebook has posted a lengthy response (read it here) positing that the “set of documents, by design, tells only one side of the story and omits important context.” They give a blow-by-blow response to Collins’ points, though they are ultimately pretty selective in what they actually address.

Generally they suggest that some of the issues being framed as anti-competitive were in fact designed to prevent “sketchy apps” from operating on the platform. Furthermore, Facebook details that they delete some old call logs on Android, that using “market research” data from Onavo is essentially standard practice and that users had the choice whether data was shared reciprocally between FB and developers. In regard to specific competitors’ apps, Facebook appears to have tried to get ahead of this release with their announcement yesterday that it was ending its platform policy of banning apps that “replicate core functionality.”

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

The timing of Facebook’s policy shift announcement hardly looks incidental, though — given that Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

Onavo charts are quite an insight into facebook’s commanding view of the app-based attention marketplace pic.twitter.com/Ezdaxk6ffC

— David Carroll 🦅 (@profcarroll) December 5, 2018

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

One email dated November 15, 2013, from Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

Powered by WPeMatico

D-Wave offers the first public access to a quantum computer


Outside the crop of construction cranes that now dot Vancouver’s bright, downtown greenways, in a suburban business park that reminds you more of dentists and tax preparers, is a small office building belonging to D-Wave. This office — squat, angular and sun-dappled one recent cool Autumn morning — is unique in that it contains an infinite collection of parallel universes.

Founded in 1999 by Geordie Rose, D-Wave worked in relative obscurity on esoteric problems associated with quantum computing. When Rose was a PhD student at the University of British Columbia, he turned in an assignment that outlined a quantum computing company. His entrepreneurship teacher at the time, Haig Farris, found the young physicist’s ideas compelling enough to give him $1,000 to buy a computer and a printer to type up a business plan.

The company consulted with academics until 2005, when Rose and his team decided to focus on building usable quantum computers. The result, the Orion, launched in 2007, and was used to classify drug molecules and play Sudoku. The business now sells computers for up to $10 million to clients like Google, Microsoft and Northrop Grumman.

“We’ve been focused on making quantum computing practical since day one. In 2010 we started offering remote cloud access to customers and today, we have 100 early applications running on our computers (70 percent of which were built in the cloud),” said CEO Vern Brownell. “Through this work, our customers have told us it takes more than just access to real quantum hardware to benefit from quantum computing. In order to build a true quantum ecosystem, millions of developers need the access and tools to get started with quantum.”

Now their computers are simulating weather patterns and tsunamis, optimizing hotel ad displays, solving complex network problems and, thanks to a new, open-source platform, could help you ride the quantum wave of computer programming.

Inside the box

When I went to visit D-Wave, they gave me unprecedented access to the inside of one of their quantum machines. The computers, which are about the size of a garden shed, have a control unit on the front that manages the temperature as well as a queuing system to translate and communicate the problems sent in by users.

Inside the machine is a tube that, when fully operational, contains a small chip super-cooled to 0.015 Kelvin, or -459.643 degrees Fahrenheit or -273.135 degrees Celsius. The entire system looks like something out of the Death Star — a cylinder of pure data that the heroes must access by walking through a little door in the side of a jet-black cube.
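Those temperatures are just unit conversions of the same figure, and easy to sanity-check with a few lines of Python:

```python
# Sanity check of the temperatures quoted above: 0.015 K expressed in
# degrees Celsius and Fahrenheit using the standard conversion formulas.
kelvin = 0.015
celsius = kelvin - 273.15          # K to degrees C
fahrenheit = celsius * 9 / 5 + 32  # degrees C to degrees F

print(round(celsius, 3))     # -273.135
print(round(fahrenheit, 3))  # -459.643
```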

It’s quite thrilling to see this odd little chip inside its super-cooled home. As the computer revolution maintained its predilection toward room-temperature chips, these odd and unique machines are a connection to an alternate timeline where physics is wrestled into submission in order to do some truly remarkable things.

And now anyone — from kids to PhDs to everyone in-between — can try it.

Into the ocean

Learning to program a quantum computer takes time. Because the processor doesn’t work like a classic universal computer, you have to train the chip to perform simple functions that your own cellphone can do in seconds. However, in some cases, researchers have found the chips can outperform classic computers by 3,600 times. This trade-off — the movement from the known to the unknown — is why D-Wave exposed their product to the world.

“We built Leap to give millions of developers access to quantum computing. We built the first quantum application environment so any software developer interested in quantum computing can start writing and running applications — you don’t need deep quantum knowledge to get started. If you know Python, you can build applications on Leap,” said Brownell.

To get started on the road to quantum computing, D-Wave built the Leap platform. Leap is an open-source toolkit for developers. When you sign up, you receive one minute’s worth of quantum processing unit time which, given that most problems run in milliseconds, is more than enough to begin experimenting. A queue manager lines up your code and runs it in the order received, and the answers are spit out almost instantly.
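D-Wave hasn’t published the internals of that queue manager, but the behavior described (jobs executed strictly in the order they arrive) is ordinary FIFO scheduling, which a toy Python sketch makes concrete:

```python
# Toy model of a FIFO job queue: submissions run in arrival order.
# Illustrative only; this is not D-Wave's actual queue manager code.
from collections import deque

jobs = deque()
for job_id in ("job-a", "job-b", "job-c"):
    jobs.append(job_id)  # enqueue in the order received

ran = []
while jobs:
    ran.append(jobs.popleft())  # always run the oldest waiting job

print(ran)  # ['job-a', 'job-b', 'job-c']
```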

You can code on the QPU with Python or via Jupyter notebooks, and it allows you to connect to the QPU with an API token. After writing your code, you can send commands directly to the QPU and then output the results. The programs are currently pretty esoteric and require a basic knowledge of quantum programming but, it should be remembered, classic computer programming was once daunting to the average user.

I downloaded and ran most of the demonstrations without a hitch. These demonstrations — factoring programs, network generators and the like — essentially turned the concepts of classical programming into quantum questions. Instead of iterating through a list of factors, for example, the quantum computer creates a “parallel universe” of answers and then collapses each one until it finds the right answer. If this sounds odd it’s because it is. The researchers at D-Wave argue all the time about how to imagine a quantum computer’s various processes. One camp sees the physical implementation of a quantum computer to be simply a faster methodology for rendering answers. The other camp, itself aligned with Professor David Deutsch’s ideas presented in The Beginning of Infinity, sees the sheer number of possible permutations a quantum computer can traverse as evidence of parallel universes.

What does the code look like? It’s hard to read without understanding the basics, a fact the D-Wave engineers accounted for by offering online documentation. For example, below is most of the factoring code for one of their demo programs, a bit of code that can be reduced to about five lines on a classical computer. However, when this function uses a quantum processor, the entire process takes milliseconds versus minutes or hours.

Classical

# Python program to find the factors of a number

def print_factors(x):
    # This function takes a number and prints the factors
    print("The factors of", x, "are:")
    for i in range(1, x + 1):
        if x % i == 0:
            print(i)

# change this value for a different result
num = 320

# uncomment the following line to take input from the user
# num = int(input("Enter a number: "))

print_factors(num)

Quantum

@qpu_ha
def factor(P, use_saved_embedding=True):

    ####################################################################################################
    # get circuit
    ####################################################################################################

    construction_start_time = time.time()

    validate_input(P, range(2 ** 6))

    # get constraint satisfaction problem
    csp = dbc.factories.multiplication_circuit(3)

    # get binary quadratic model
    bqm = dbc.stitch(csp, min_classical_gap=.1)

    # we know that multiplication_circuit() has created these variables
    p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']

    # convert P from decimal to binary
    fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
    fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

    # fix product qubits
    for var, value in fixed_variables.items():
        bqm.fix_variable(var, value)

    log.debug('bqm construction time: %s', time.time() - construction_start_time)

    ####################################################################################################
    # run problem
    ####################################################################################################

    sample_time = time.time()

    # get QPU sampler
    sampler = DWaveSampler(solver_features=dict(online=True, name='DW_2000Q.*'))
    _, target_edgelist, target_adjacency = sampler.structure

    if use_saved_embedding:
        # load a pre-calculated embedding
        from factoring.embedding import embeddings
        embedding = embeddings[sampler.solver.id]
    else:
        # get the embedding
        embedding = minorminer.find_embedding(bqm.quadratic, target_edgelist)
        if bqm and not embedding:
            raise ValueError("no embedding found")

    # apply the embedding to the given problem to map it to the sampler
    bqm_embedded = dimod.embed_bqm(bqm, embedding, target_adjacency, 3.0)

    # draw samples from the QPU
    kwargs = {}
    if 'num_reads' in sampler.parameters:
        kwargs['num_reads'] = 50
    if 'answer_mode' in sampler.parameters:
        kwargs['answer_mode'] = 'histogram'
    response = sampler.sample(bqm_embedded, **kwargs)

    # convert back to the original problem space
    response = dimod.unembed_response(response, embedding, source_bqm=bqm)

    sampler.client.close()

    log.debug('embedding and sampling time: %s', time.time() - sample_time)
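One step of that demo runs fine without any quantum hardware: converting the product P to a 6-bit string and pinning the bits onto the p0..p5 output variables. Pulled out as standalone Python, with 21 as an example product:

```python
# Standalone illustration of the bit-fixing step from the factoring demo:
# the product P is written as a 6-bit binary string, and each bit is
# assigned to one of the p0..p5 output variables (p0 is the least
# significant bit).
p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']
P = 21  # the number to factor (3 x 7)

fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

print(fixed_variables)  # {'p5': 0, 'p4': 1, 'p3': 0, 'p2': 1, 'p1': 0, 'p0': 1}
```

Reassembling the bits (p0 + 4·p2 + 16·p4 = 1 + 4 + 16) recovers 21, which is exactly the constraint the quantum sampler then has to satisfy.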

 

“The industry is at an inflection point and we’ve moved beyond the theoretical, and into the practical era of quantum applications. It’s time to open this up to more smart, curious developers so they can build the first quantum killer app. Leap’s combination of immediate access to live quantum computers, along with tools, resources, and a community, will fuel that,” said Brownell. “For Leap’s future, we see millions of developers using this to share ideas, learn from each other and contribute open-source code. It’s that kind of collaborative developer community that we think will lead us to the first quantum killer app.”

The folks at D-Wave created a number of tutorials as well as a forum where users can learn and ask questions. The entire project is truly the first of its kind and promises unprecedented access to what amounts to the foreseeable future of computing. I’ve seen lots of technology over the years, and nothing has quite replicated the strange frisson associated with plugging into a quantum computer. Like the teletype and green-screen terminals used by early hackers like Bill Gates and Steve Wozniak, D-Wave has opened up a strange new world. How we explore it is up to us.


Microsoft Azure bets big on IoT


At its Ignite conference in Orlando, Florida, Microsoft today announced a plethora of new Internet of Things-focused updates to its Azure cloud computing platform. It’s no secret that the amount of data generated by IoT devices is a boon to cloud computing services like Azure — and Microsoft is definitely aiming to capitalize on this (and its existing relationships with companies in this space).

Some of today’s announcements are relatively minor. Azure IoT Central, the company’s solution for helping you get started with IoT, is now generally available, for example, and there are updates to Microsoft’s IoT provisioning service, IoT hub message routing tools and Map Control API.

Microsoft also today announced that the Azure IoT platform will now support Google’s Android and Android Things platform via its Java SDK.

What’s more interesting, though, is the new services. The highlight here is probably the launch of Azure Digital Twins. Using this new service, enterprises can now build their own digital models of any physical environment.

Think of it as the virtual counterpart to a real-world IoT deployment — and as the IoT deployment in the real world changes, so does the digital model. It will provide developers with a full view of all the devices they have deployed and allows them to run advanced analytics and test scenarios as needed without having to make changes to the actual physical deployment.

“As the world enters the next wave of innovation in IoT where the connected objects such as buildings, equipment or factory floors need to be understood in the context of their environments, Azure Digital Twins provides a complete picture of the relationships and processes that connect people, places and devices,” the company explains in today’s announcement.

Azure Digital Twins will launch into preview on October 15.

The other major announcement is that Azure Sphere, Microsoft’s play for getting into small connected microcontroller devices, is now in public preview, with development kits shipping to developers now. For Azure Sphere, Microsoft built its own Linux-based kernel, but the focus here is obviously on selling services around it, not getting licensing fees. Every year, hardware companies ship nine billion of these small chips, and few of them are easily updated, leaving them prone to security issues once they are out in the wild. Azure Sphere aims to offer a combination of cloud-based security, a secure OS and a certified microcontroller to remedy this situation.

Microsoft also notes that Azure IoT Edge, its fully managed service for delivering Azure services, custom logic and AI models to the edge, is getting a few updates, too, including the ability to submit third-party IoT Edge modules for certification and inclusion in the Azure Marketplace. It’s also about to launch the public preview of IoT Edge extended offline for those kinds of use cases where an IoT device goes offline for — you guessed it — an extended period.



Anaxi brings more visibility to the development process


Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights about the state of your projects and to help you manage your projects and issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time is the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You also can define due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it only caches as little information as necessary (including your handles) and then pulls the rest of the information from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data and that GitHub’s API is quite fast and easy to work with.
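Anaxi hasn’t open-sourced its client, but the read-through pattern described above (keep a minimal local cache, fetch everything else on demand) maps directly onto GitHub’s public REST API. A minimal sketch using the standard v3 issues endpoint; the function names are illustrative, not Anaxi’s:

```python
# Sketch of a thin GitHub client in the spirit Anaxi describes: nothing is
# stored beyond what is needed to build the request; issue data is pulled
# live from the GitHub REST API each time.
import json
from urllib.request import Request, urlopen

API = "https://api.github.com"

def issues_url(owner, repo, state="open"):
    # Standard GitHub v3 endpoint for listing a repository's issues.
    return "%s/repos/%s/%s/issues?state=%s" % (API, owner, repo, state)

def fetch_issues(owner, repo, token=None):
    req = Request(issues_url(owner, repo))
    req.add_header("Accept", "application/vnd.github+json")
    if token:
        # Private repositories need an OAuth token, as Anaxi's app would use.
        req.add_header("Authorization", "token " + token)
    with urlopen(req) as resp:  # network call; returns the parsed JSON list
        return json.load(resp)

print(issues_url("octocat", "hello-world"))
```

The trade-off noted in the paragraph above shows up here directly: every read is a round-trip to GitHub, which costs latency but guarantees fresh data.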

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.


Apple is introducing a health record API for developers this fall


For all of the news that Apple managed to cram into its 135-minute(!) WWDC keynote this morning, the event was actually pretty light on health care updates. It was a bit of a surprise, given how much of a focus the company has put on the space at past events.

Apple did announce an interesting health tidbit on its website today — something that likely just got squeezed out of the keynote late in the game. Starting this fall, the company will open up health record data to third-party iOS apps through a new API. The feature will make it possible for users to share health data from more than 500 hospitals and clinics with third-party apps.

There are, clearly, some serious concerns around sharing this sort of sensitive data.

The company is addressing this in a couple of ways. For starters, it’s all opt-in, obviously. Your personal information won’t be shared with any apps unless you explicitly allow it to be. The health records are also encrypted and stored locally on the phone.

“When consumers choose to share their health record data with trusted apps,” according to Apple, “the data flows directly from HealthKit to the third-party app and is not sent to Apple’s servers.”

As far as specific applications for such data, Apple points to medication tracking as one of the key use cases. Medisafe will be among the first to use the information in this way, letting users import prescription lists in order to push reminders, without having to manually enter all of that information in the app.

Disease management is another possibility, for something along the lines of a diabetes app, which customizes recommendations based on health information. There’s also some applications for broader medical research here, providing anonymized health data for laboratory purposes.


Oracle CEO claims it discounted Java by 97.5% to beat out Android on Amazon’s Paperwhite


Oracle and Google continue to fight it out in a retrial over $9 billion that Oracle claims Google owes it for using its Java code in its popular Android mobile platform. And in the process, we’re also hearing details about other companies that may not have been known before. Today it was the turn of Amazon, which Oracle said ran Java in its Kindle Paperwhite, but only after… Read More


Google launches new services for Android game developers


Google today announced a number of new services for game developers at its annual Developer Day at the Game Developers Conference. They include tools for managing virtual goods and currencies, the launch of the Video Recording API so developers can make it easier for players to stream and share videos to YouTube, and a new ad type that allows new players to trial a game for 10 minutes right… Read More


Instagram kills newly launched ‘Being’ app, which saw 50K downloads its first week


In case there was any doubt where Instagram was drawing the line when it came to its shutdown of third-party feed reading apps, it appears that its decision to revoke API access doesn’t just extend to those that offer an alternative means of browsing the photo-sharing service. It also reaches apps that offer an expansion of what you can do with Instagram – for example, the… Read More
