Sometimes the smartest, most innovative thinkers make fantastically wrong predictions. In 1878, electrical engineer Sir William Preece said the telephone would never put “messenger boys” out of business.
In 1946, 20th Century Fox executive Darryl Zanuck claimed people would tire of television within six months of its arrival. In 1977, engineer and entrepreneur Ken Olsen said, “There is no reason anyone would want a computer in their home.” (In fairness, he was referring to a science-fiction version of today’s Internet of Things [IoT] home devices, which he may also be wrong about.)
Looking to 2017 and beyond, true computer-aided design (CAD) and computer-aided manufacturing (CAM) will become a reality. Until now, computers have never participated in people’s thought processes. They have simply waited patiently for instructions, never pushing people beyond their creative boundaries.
But that’s changing. Here are five design and technology predictions to look forward to.
1. Virtual Reality Will Make a Big Impact on the Construction Industry
Virtual reality (VR) as an aid to architects is a growing trend, but the most profound effect may be on construction. VR gives construction professionals a more faithful representation than they typically get from text-based schedules (such as Gantt charts) and 3-D graphical data (such as BIM models).
With VR, general contractors will be able to virtually walk onto a job site and see what it will look like the following week. Once immersed in the data, they will be able to point out issues, resolve differences and coordinate changes—before the future site is real. Workers will also be able to do practice runs.
The implications of this shift—transporting construction managers and workers into a different way of relating to their data—will be massive savings of time and money, as well as preventing mistakes and accidents.
2. Machine Learning Will Take Product-Design Creativity to a New Level
Machine learning is accelerating at an exponential pace. Just as scientists have stimulated the human brain to trigger synesthetic sensations and false memories, it’s possible to stimulate “neurons” inside software to discover objects that have never been invented.
An example is Autodesk’s Design Graph project, which mines immense amounts of data, discerns relationships among parts (such as gears, bolts, and screws), groups like things together by shape, and makes relevant recommendations. As this system was trained, it developed recognition skills similar to the way the human brain works. Just as people can differentiate between a dog and a cat, that cognition exists inside the software.
For example, if one set of neurons is stimulated to make a chair and another set to make an airplane, the user can slide between the two, stimulating a “false memory” and watching the object morph between chair and plane. What’s interesting isn’t so much those morphologically dissimilar examples. It’s that looking in the neighborhood of existing objects and finding the uncharted spaces in between might point to new product opportunities.
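Conceptually, that morphing is interpolation in a learned latent space. Here is a minimal sketch, with random vectors standing in for real learned embeddings; the names `chair_vec` and `plane_vec` and the 64-dimensional size are invented for illustration and are not from any actual Design Graph internals:

```python
import numpy as np

# Hypothetical 64-dimensional latent vectors standing in for the learned
# "neuron" activations of two object categories; a real system would get
# these from a trained model, not a random generator.
rng = np.random.default_rng(0)
chair_vec = rng.normal(size=64)
plane_vec = rng.normal(size=64)

def interpolate(a, b, t):
    """Linearly blend two latent vectors: t=0 returns a, t=1 returns b."""
    return (1 - t) * a + t * b

# Sliding t from 0 to 1 traces the "in between" designs the text describes.
morph_path = [interpolate(chair_vec, plane_vec, t) for t in np.linspace(0, 1, 5)]
```

Decoding the intermediate vectors back into geometry is where the new, uncharted designs would appear.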
3. Sensing Robots Will Make Manufacturing Faster and More Accurate
Consumer IoT devices can be ridiculously overwrought. Does a toaster need to be connected to a smartphone when a $35 Black+Decker will suffice? In the near future, IoT will make a much bigger impact in industrial robotics.
Until now, robots have been completely blind, executing the same rote Etch A Sketch–style interactions regardless of people, other robots, or the workpiece in front of them. Going forward, manufacturing robots need to be more flexible and adaptable to different situations.
Important work is underway with sensing robots—such as Madeline Gannon’s Quipt project—to help robots see their surroundings and alter their programs to avoid repeating the same dumb mistakes. The Autodesk Applied Research Lab is also working on a large-scale additive robot with goals rather than tasks. At every moment while it’s printing, it measures how well it’s accomplishing the goal and can course correct if needed.
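The goal-driven behavior described above is essentially a feedback loop: measure, compare against the goal, correct. A toy sketch of that loop follows; the gain, tolerance, and scalar “build state” are all invented for illustration and reflect nothing about the actual Autodesk robot:

```python
def course_correct(goal, gain=0.5, tolerance=0.01, max_steps=100):
    """At every step, measure how far the build is from the goal and
    correct by a fraction of the error, instead of replaying fixed moves."""
    value = 0.0  # stand-in for a measured build state, e.g. deposited height
    for step in range(max_steps):
        error = goal - value
        if abs(error) <= tolerance:
            return value, step
        value += gain * error  # proportional correction toward the goal
    return value, max_steps

final, steps_taken = course_correct(10.0)
```

The point of the sketch is the control structure: progress toward the goal is measured continuously, so a disturbance at any step simply becomes a larger error to correct on the next one.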
Another robot at the lab can pick up an object it has never encountered, form an “opinion,” and determine the most grippable part. If it fails, it learns from its mistakes. The next step could be replacing the robot’s literal camera eyes with a rendered scene, such as a LEGO on a park bench, a LEGO among other LEGOs, or a LEGO on a cat.
Working with the computer and the machine-learning system, the robot imagines these different scenarios so it doesn’t have to act them out in the real world. These training scenarios can run in parallel and at scale: once the robot learns to pick up a LEGO, it can accelerate through an offline process to learn tens of millions of different objects simultaneously. Then it will know how to pick up almost anything.
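The offline, at-scale idea can be sketched as running many simulated grasp trials per object in parallel and keeping a success estimate. Everything below is a placeholder: the seeded coin flip stands in for a real physics simulator and learned grasping policy:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_grasp(obj_id, trial):
    """Placeholder physics trial: a seeded coin flip stands in for
    rendering a scene and running a learned grasping policy."""
    rng = random.Random(obj_id * 100_000 + trial)
    return rng.random() < 0.7  # invented base success rate

def evaluate_object(obj_id, trials=200):
    """Estimate grasp success for one object from many simulated trials."""
    successes = sum(simulate_grasp(obj_id, t) for t in range(trials))
    return obj_id, successes / trials

# Objects are independent, so their trial batches can run in parallel.
with ThreadPoolExecutor() as pool:
    scores = dict(pool.map(evaluate_object, range(20)))
```

Because the trials are pure simulation, scaling from 20 objects to millions is a matter of compute, not robot time.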
4. Generative Design and Simulation Will Predict Manufacturability
When playing Battleship, a player might guess C6, get a hit, and then speculate that C4 might also be a hit. In time, through trial and error, the player might get lucky and sink the opponent’s battleship.
Generative design is like turning the game around and showing the player where all the ships are. With simulation and generative design, users will be able to see all options, and all of them will be manufacturable because they are vetted by the computer first.
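A minimal sketch of that “see where all the ships are” idea: enumerate a large candidate space, vet each option against a constraint, and rank what survives. The weight and stress numbers and the vetting rule are invented stand-ins for real geometry generation and simulation:

```python
import random

def generate_candidates(n, seed=0):
    """Hypothetical generative step: each candidate design is a
    (weight_kg, max_stress_mpa) pair; a real tool explores geometry."""
    rng = random.Random(seed)
    return [(rng.uniform(5.0, 20.0), rng.uniform(50.0, 400.0)) for _ in range(n)]

def manufacturable(design, stress_limit=250.0):
    """Placeholder vetting rule standing in for simulation-based checks."""
    weight, stress = design
    return stress <= stress_limit

# Enumerate the whole option space, keep only vetted designs, lightest first.
options = generate_candidates(10_000)
viable = sorted((d for d in options if manufacturable(d)), key=lambda d: d[0])
```

The key difference from trial and error is the order of operations: the computer vets every option before the designer ever has to choose.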
For example, Airbus used generative design to create an airplane partition (which separates the crew from passengers) that is 45 percent lighter than conventional partitions. Of the 10,000-plus design options created through generative design, the Airbus team chose a couple to test comprehensively using simulation software. The result was a structurally sound yet lightweight partition that could be printed by three additive manufacturing systems—without fail.
The process didn’t just save time and money. If the thousands of new A320 planes currently on order have these partitions installed, it could cut CO2 emissions by hundreds of thousands of metric tons every year.
5. Crowdsourced Data and Generative Design Will Create Happier Workplaces
Generative design will take hold first in the manufacturing industry, where cycle times from inception to product are short. But there’s also room for architects to use generative design to explore goals, constraints, and outcomes.
One example is how Autodesk and architecture studio The Living approached the MaRS building: first surveying employees to understand their needs for collaboration, daylight, privacy and more. Based on that data, the generative-design tool created multiple plan options from thousands of configurations.
Building orientation, fenestration, shading, and number of floors: Those aspects and their implications have all been explored. But what improves employee productivity inside a building? The answer lies in mapping the survey data on human preferences (which vary from person to person) to the calculations performed by the computer.
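One simple way to do that mapping is a weighted score per plan option: metrics computed for each layout, weighted by surveyed preferences. The metric names, plan names, and numbers below are invented for illustration, not taken from the MaRS project:

```python
def score(option, weights):
    """Weight each computed metric by how much occupants said they value it."""
    return sum(weights[metric] * option[metric] for metric in weights)

# Hypothetical per-plan metrics (0-1 scale) computed by the design tool.
plans = {
    "plan_a": {"daylight": 0.8, "privacy": 0.4, "collaboration": 0.6},
    "plan_b": {"daylight": 0.5, "privacy": 0.9, "collaboration": 0.3},
}
# Hypothetical survey-derived preference weights, summing to 1.
prefs = {"daylight": 0.5, "privacy": 0.2, "collaboration": 0.3}

best = max(plans, key=lambda name: score(plans[name], prefs))
```

Because the weights come from the survey rather than from the designer, a different workforce with different preferences would rank the same plans differently.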
This approach means investigating a highly dynamic situation with multiple objectives, and that’s where generative-design techniques outperform the traditional “let’s pick the first thing that works” approach.