Robots & Startups
More interesting things in robotics in 2022! - Issue #85
Andra Keay
Oct 16, 2022

Now it's on. Every American executive and engineer working in China's semiconductor manufacturing industry resigned on Friday, paralyzing Chinese manufacturing overnight.

Find the latest trade statistics and economic complexity data for China.

Eventbrite - SVR @ Circuit Launch presents Xoxe's 1st Birthday and Launch of Joanne Pransky's Museum of Social Robots - Monday, October 17, 2022 at Circuit Launch, Oakland, CA. Find event and ticket information.

Azeem Azhar, Oct 10

Wow. DeepMind announced AlphaTensor, a neural network which has unlocked novel mathematical algorithms. Using an extension of AlphaZero (its general game-playing engine), AlphaTensor has improved a key piece of fundamental mathematics: matrix multiplication. Wow. I said that before, yes. The real wow is that matrix multiplication is at the heart of modern deep learning. A typical neural network will conduct billions, even trillions, of such multiplications. In fact, the demand for matrix multiplications has increased polynomially in the past decade (which is one reason why graphics processing units have taken off as the staple computing system for AI).

Wow. Here is why it matters. AlphaTensor has improved a foundational aspect of what it is to build AI systems such as… AlphaTensor. Foundational. It has improved on a 53-year-old piece of maths, Strassen's algorithm. The approach unveiled by AlphaTensor is 10-20% faster than previous methods, according to DeepMind's boss, Demis Hassabis. (To multiply two 4-by-4 matrices, Strassen's method, the previous best, needed 49 multiplication steps; AlphaTensor has found a 47-step method.)
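To make those step counts concrete, here is a minimal sketch in plain Python (a toy illustration, not DeepMind's code or AlphaTensor's actual scheme): the schoolbook method multiplies two 2-by-2 matrices with eight multiplications, while Strassen's 1969 trick needs only seven. Applied recursively to 4-by-4 matrices, where each scalar below becomes a 2-by-2 block, that saving compounds to the 49 multiplications mentioned above, the record AlphaTensor trimmed to 47.

```python
def multiply_naive(A, B):
    """Schoolbook 2x2 matrix multiplication: 8 scalar multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    return [[a11 * b11 + a12 * b21, a11 * b12 + a12 * b22],
            [a21 * b11 + a22 * b21, a21 * b12 + a22 * b22]]


def multiply_strassen(A, B):
    """Strassen's 2x2 matrix multiplication: 7 scalar multiplications (more additions)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]


if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    # Both routes give the same product, [[19, 22], [43, 50]],
    # but Strassen's route uses one fewer multiplication.
    assert multiply_naive(A, B) == multiply_strassen(A, B)
    print(multiply_strassen(A, B))
```

Saving one multiplication per block looks trivial here, but it compounds at every level of recursion, which is why trimming even two multiplications from the 4-by-4 case matters for workloads that perform billions of them.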
The habits of highly effective AI

Matrices are at the heart of artificial neural networks. So, is this potentially a case of recursive self-improvement, that is, making improvements in the ability to make improvements? Recursive self-improvement could lead to the "intelligence explosion" proposed by the mathematician I. J. Good. In 1965, Good described an ultraintelligent machine:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Let's explore. Technologies often lay the foundations for being able to improve themselves. In recent times, we've built better silicon chips because we used computers (with worse silicon chips in them) to design the next generation. The first electronic design automation tools for semiconductors emerged in 1966, less than ten years after the first chips. Now EDA (and its successors, such as Google's PRIME) is a staple of semiconductor design. Modern deep neural nets are so complex that automated, algorithmic approaches are used to fine-tune their many hyperparameters.

So, is AlphaTensor an example of recursive self-improvement? Or is it a more straightforward application of technology to improve an existing process? And how would we know?

I don't believe AlphaTensor is proof-positive of recursive self-improvement. But nor is it a straightforward application of the technology (like EDA) to improve the construction of neural nets. Hence the three "wows" at the start of this essay.

Why the wows? DeepMind has found something in foundational maths. And it has done so by building upon AlphaZero, an agent that has shown superhuman performance on board games like chess, Go and shogi. This work marks AlphaZero's journey from playing games to tackling unsolved mathematical problems for the first time. And the act of multiplying a matrix is a rather common one in our modern world. Beyond deep learning, matrix multiplications are used throughout engineering of all flavours. That is pretty impressive.

But more than that, they have found a new way of discovering algorithms. Algorithms are at the heart of pretty much everything we build today, and novel approaches could improve efficiencies (which, in turn, means lower cost and lower resource usage). I want to stress the "novel", because "novel" is rare in machine learning.

Let's face it: lots of the breakthroughs in machine learning (and software development in general) have been about exploitation: optimisation, refinement and implementation. It is about climbing the highest hill you can see, not wandering the terrain looking for an even higher one. But much of DeepMind's work seems to tilt towards the idea of exploration. Exploration is riskier: it is about search, variation, discovery, novelty. Deep Blue's defeat of Kasparov was exploitation mode, brute-forcing every part of the search space. AlphaGo's famous Move 37, where the neural network came up with a sublime and unimaginable move, was a demonstration of exploration at work. And the final wow, praising the exploratory nature of the work, is also a commendation of the internal culture at DeepMind, which provides the freedom to explore these avenues.

The explosion

But it isn't the runaway explosion that I. J. Good describes above. What we have is some kind of evidence of fundamental improvements in the ways in which we build ML machines. These improvements are being discovered by humans using these machines. That certainly accelerates the pace. But that is not the only thing accelerating the speed of ML discovery. The ecosystem of academics, developers and tinkerers, together with the economic incentives, creates a large economy-wide machine that is pushing the frontiers of AI.

And we're increasingly applying those machines to build the next generation, and doing so at a seemingly faster pace. But can the machine build itself? Not yet.