TPG takeover gets tick from iiNet; 1bn Android phones at risk; Hawking, Woz warn against AI


By Andrew Collins
Thursday, 30 July, 2015


iiNet shareholders have approved TPG’s proposed takeover of the company, with an overwhelming majority voting in favour of the acquisition, which is reportedly valued at $1.56 billion.

The vote took place at a meeting held in Perth on Monday morning, with more than 99% of shareholders casting a vote, either in person or by proxy. Only 23 of 2884 shareholders abstained from voting.

Of those who voted, a whopping 89.93% of shareholders (representing 95.09% of votes) backed the takeover, with 10.07% of shareholders (4.91% of votes) voting against the deal.

But while the acquisition may have majority iiNet shareholder approval, it’s still subject to approval by the ACCC and by the Australian Federal Court.

ACCC approval is not necessarily guaranteed. In June the organisation released its preliminary view on the acquisition, saying that the purchase “may lead to a substantial lessening of competition, potentially resulting in higher prices and/or degradation of the non-price offers available in the [market for the supply of retail fixed broadband services], including customer service”.

However, the ACCC emphasised at the time that this view was preliminary and didn’t necessarily represent its final decision.

iiNet expects the ACCC’s final decision to be released on 20 August. A Federal Court hearing is scheduled for 21 August.

Android phones at risk

Almost one billion Android phones are vulnerable to a new attack that can be instigated by a single text message, Wired has reported.

According to Wired, the vulnerability allows an attacker to execute remote code on a target Android phone. An attack capitalising on the vulnerability apparently involves sending a maliciously crafted video file to a target phone via MMS.

The Guardian reported that the vulnerability could allow an attacker to read and delete data, or to spy on a user through the device’s camera and microphone.

The vulnerability was reportedly discovered by Joshua J Drake, Zimperium zLabs vice president of platform research and exploitation.

According to the BBC, Zimperium estimated that 950 million devices were affected by the vulnerability.

“These vulnerabilities are extremely dangerous because they do not require that the victim take any action to be exploited. Unlike spear-phishing, where the victim needs to open a PDF file or a link sent by the attacker, this vulnerability can be triggered while you sleep,” Wired quoted Zimperium as saying.

“Before you wake up, the attacker will remove any signs of the device being compromised and you will continue your day as usual — with a trojaned phone.”

According to Wired, Drake said that Google has sent out a patch for the vulnerability to its partners, but that most manufacturers have not pushed a fix to their customers.

Hawking, Wozniak issue AI warning

Leading scientists and tech industry figures have signed an open letter that calls for a ban on autonomous weapons powered by artificial intelligence.

The letter was organised by the Future of Life Institute, which, according to its website, is a “research and outreach organization working to mitigate existential risks facing humanity” including “potential risks from the development of human-level artificial intelligence”.

The letter currently has more than 12,000 signatories, including renowned physicist Stephen Hawking, Apple co-founder Steve Wozniak and philosopher Noam Chomsky.

“Autonomous weapons select and engage targets without human intervention,” the letter said. “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The letter warns of a “global AI arms race”.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” it said.

“It will only be a matter of time until [AI-powered autonomous weapons] appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

The letter suggested that instead of being used as part of “new tools for killing people”, AI could instead be used to make battlefields safer for humans, especially civilians.

“[W]e believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

The letter is not long and, given the people who have endorsed its message, is well worth a read. It is available in its entirety on the Future of Life Institute’s website.

