Apple Acquired Machine Learning Startup Inductiv for Siri Improvements

Apple recently purchased Ontario-based machine learning startup Inductiv for the purpose of improving Siri, reports Bloomberg.
Apple confirmed the purchase with one of its typical acquisition statements: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.”

Inductiv’s engineering team joined Apple after the acquisition to work on Siri, machine learning, and data science. Prior to the purchase, Inductiv had developed technology that uses AI to automate the identification and correction of errors in data used for machine learning. From Bloomberg:

Having clean data is important for machine learning, a popular and powerful type of AI that helps software improve with less human intervention.

The work falls under the category of data science, a key element of Apple’s broader machine-learning strategy.

Apple’s Siri chief John Giannandrea, who also oversees machine learning, has been upgrading the underlying technology that Siri is based on with acquisitions of companies like Inductiv.

Apple has purchased multiple AI and machine learning-related companies over the last several years, including Voysis, Xnor.ai, Turi, Perceptio, and Tuplejump.


RSA 2020 – Is your machine learning/quantum computer lying to you?

And how would you know if the algorithm was tampered with?

Quantum computing is the new heir apparent to AI in terms of the number of wild security claims. Several years ago, you could add AI to a marketing sentence and sell products of vacuous import. That crown is oozing toward quantum. Throw in encryption and you get a raise, maybe even an IPO. At RSA, however, there’s a real quantum computing group working on a real problem: whether the “random” numbers generated by quantum computers for use in cryptographic algorithms can be forged.

While other quantum computing systems go to the trouble of generating a pretty darned random number for use in cryptographic keys (which is very definitely a good thing), they’re not very good at attesting to whether the resulting number has been tampered with. Mess with the “random” number and you can rig the cryptographic keys to be broken.

Enter Cambridge Quantum Computing. At RSA they’ve trotted out a machine that generates a sort of unalterable proof for each number it produces, in the form of a horizontally and vertically polarized photon pair – modify one and you can tell, because quantum entanglement links the two. This means you can get certifiably random keys without having to trust anyone, so a first provably zero-trust security device is a welcome addition here.

How do you check? Using the Bell inequality, a test for this kind of correlation devised in 1964. Even if you don’t know what that is, the point is there’s a way to verify integrity – something that would otherwise be far more elusive.
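For the curious, the CHSH form of the Bell test can be sketched in a few lines. This is a toy calculation assuming the ideal textbook correlation for polarization-entangled photons (not Cambridge Quantum’s actual protocol, and with analyzer angles chosen purely for illustration):

```python
import math

def correlation(a, b):
    # Ideal correlation for polarization-entangled photons measured at
    # analyzer angles a and b: E(a, b) = cos(2 * (a - b)).
    # (A textbook assumption for illustration only.)
    return math.cos(2 * (a - b))

def chsh(a, a_alt, b, b_alt):
    # CHSH form of the Bell inequality: any classical (local hidden
    # variable) source must satisfy S <= 2.
    return abs(correlation(a, b) - correlation(a, b_alt)
               + correlation(a_alt, b) + correlation(a_alt, b_alt))

# Analyzer angles that maximize the quantum violation.
S = chsh(0, math.pi / 4, math.pi / 8, 3 * math.pi / 8)
# Entangled photons can reach S = 2*sqrt(2), about 2.83, beating the
# classical bound of 2 -- evidence the source is genuinely quantum.
```

A classical source faking its randomness can never push S past 2, which is what makes the violation usable as a tamper-evident certificate.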

Speaking of lying to you, how would you know if those latest Machine Learning algorithms were tampered with?

There is a concept called “the right to an explanation” that exists in the real world, but perhaps not in marketing materials generated by vaporware companies. If we ask an ML model for an answer, how would we know if it’s skewed? Who’s checking? How would we know if the inputs or outputs have been tampered with?

If an ML machine determines whether you’re a good risk for a home mortgage, wouldn’t you want to know why it turned you down? Something besides “it’s very complicated”?

ML models have had to wrestle with failed outcomes in low-dimensional data like imagery, where operators could spot where the models got it wrong by inspecting the images and checking the results visually for anomalies. But in high-dimensional data, like the complex models typical of the real world, it’s all but impossible for an operator to “see” where things might be askew.

And this is the problem: determining whether “the model made me do it” or someone has been feeding the model bad data – intentionally or just through sloppy methodology – effectively rigging it to favor an outcome the data doesn’t, or can’t yet, support.

Typically, the response has been to feed the model more data and let the magic machine churn. Larger volumes of training data are indeed useful, but not in the case where someone is trying to rig the outcome.
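As a toy illustration of that rigging (hypothetical credit scores and a deliberately minimal nearest-mean classifier, not any real lender’s model), a handful of mislabeled training records is enough to flip a decision:

```python
# Minimal nearest-mean classifier: approve if the applicant's score is
# closer to the mean of past "approve" cases than to the "deny" mean.
def class_means(data):
    approve = [score for score, label in data if label == "approve"]
    deny = [score for score, label in data if label == "deny"]
    return sum(approve) / len(approve), sum(deny) / len(deny)

def predict(score, approve_mean, deny_mean):
    if abs(score - approve_mean) <= abs(score - deny_mean):
        return "approve"
    return "deny"

clean = [(700, "approve"), (720, "approve"), (580, "deny"), (560, "deny")]
# An attacker slips in a few low scores mislabeled "approve",
# dragging the approve-class mean down toward the deny region.
poisoned = clean + [(400, "approve"), (410, "approve"), (420, "approve")]

applicant = 500
verdict_clean = predict(applicant, *class_means(clean))        # "deny"
verdict_poisoned = predict(applicant, *class_means(poisoned))  # "approve"
```

Adding yet more honest data dilutes the poison only slowly; without some check on provenance and integrity, the “magic machine” happily learns whatever it is fed.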

In the same manner Cambridge Quantum seeks to provide a novel “proof” of an unaltered random number, machine learning could use some kind of proof besides “it’s very complicated.” We can handle complicated, but when it matters most, that’s not a very satisfying answer, if it’s an answer at all.

In its absence, who is responsible when the machine gets it wrong and sends a physical missile off course? “The missile’s guidance AI made it veer” is not an answer humans are likely to endure for long. Neither should we blindly trust the machines that will soon be guiding many of the important decisions affecting our lives. And while one session here at RSA mentioned a working group on detecting bias in AI systems, at this point there’s little concrete. We hope there will be soon.



Cameron Camp

