Both developing babies and elderly adults share a common characteristic: the many cells making up their bodies are always on the move. As we humans commute to work, cells migrate through the body to get their jobs done. Biologists have long struggled to quantify the movement and changing morphology of cells through time, but now, scientists at the Okinawa Institute of Science and Technology Graduate University (OIST) have devised an elegant tool to do just that. Using machine learning, the researchers designed software to analyze microscopic snapshots of migrating cells. They named the software Usiigaci, a Ryukyuan word that refers to tracing the outlines of objects, as the innovative tool detects the changing outlines of individual cells. Usiigaci, described in a paper published March 13, 2019 in SoftwareX, is now available online for anyone to use, along with a video tutorial explaining the software. In the womb, a baby’s cells migrate to precise locations so that each arm, leg, and organ grows in its proper place. Our immune cells race through the body to mend wounds after injury. Cancerous cells metastasize by traveling through the body, spreading tumors to new tissues. To test the efficacy of new medicines, drug developers track the movement of cells before and after treatment. The Usiigaci software finds applications in all these areas of study and more. “This is an all-in-one solution to get us from raw images to quantitative data on cell migration,” said Hsieh-Fu Tsai, first author of the study. Tsai is a graduate student and a Japan Society for the Promotion of Science (JSPS) DC1 research fellow in the OIST Micro/Bio/Nanofluidics Unit, led by Prof. Amy Shen. “Our software is at least 100 times faster than manual methods, which are currently the gold-standard for these types of experiments because computers are not yet powerful enough.” “We’re hoping this software can become quite useful for the scientific community,” said Prof. 
Amy Shen, principal investigator of the unit and senior author of the study. “For any biological study or drug screening that requires you to track cellular responses to different stimuli, you can use this software.”

Usiigaci in Action

The Micro/Bio/Nanofluidics Unit has devised machine learning software to segment, track, and analyze the movement of migrating cells. Named Usiigaci, a Ryukyuan word that means “tracing,” the software significantly outperforms existing programs and has many applications across biology and medicine.

Machine Learning Makes Usiigaci Adaptable

In order to observe cells under the microscope, scientists often steep them in dye or tweak their genes to make them glow in eye-popping colors. But coloring cells alters their movement, which in turn skews the experimental results. Some scientists attempt to study cell migration without the help of fluorescent tags, using so-called “label-free” methods, but end up running into a different problem: label-free cells blend into the background of microscopic images, making them incredibly difficult to analyze with existing computer software. Usiigaci hops this hurdle by allowing scientists to train the software over time. Biologists act as teachers, providing the software with new images to study so that it can learn to tell one cell from the next. A fast learner, the program quickly adapts to new sets of data and can easily track the movement of single cells, even if they’re crammed together like commuters on the Tokyo metro. “Most software...cannot tell cells in high-density apart; basically, they’re segmenting into a glob,” said Tsai. “With our software, we can segment correctly even if cells are touching. We can actually do single-cell tracking throughout the entire experiment.” Usiigaci is currently the fastest software capable of tracking the movement of label-free cells at single-cell resolution on a personal laptop. Prof. 
Amy Shen (left) and Hsieh-Fu Tsai of the Micro/Bio/Nanofluidics Unit stand beside the microscope that they use to capture images of migrating cells. As an initial step in pursuing his thesis project, Tsai designed software, called Usiigaci, to analyze these images and quantify the movement and changing morphology of cells through time.

Software Mimics the Human Brain

The researchers designed Usiigaci to process images as if it were a simplified human brain. The strategy enables the software to trace the outlines of individual cells, monitor their movement moment to moment, and transform that information into crunchable numbers. The program is built around a machine learning infrastructure known as a “convolutional neural network,” roughly based on how brain cells work together to process incoming information from the outside world. When our eyes capture light from the environment, they call on neurons to analyze those signals and figure out what we’re looking at and where it is in space. The neurons first sketch out the scene in broad strokes, then pass the information on to the next set of cells, progressively rendering the image in more and more detail. Neural networks work similarly, except each “neuron” is a collection of code rather than a physical cell. This design grants Usiigaci its accuracy and adaptability. Looking forward, the researchers aim to develop neural networks that identify different components within cells, rather than just their outlines. With these tools in hand, scientists could easily assess whether a cell is healthy or diseased, young or old, derived from one genetic lineage or another. Like Usiigaci, these programs would have utility in fundamental biology, biotechnology research and beyond.
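The single-cell tracking the article describes can be pictured with a toy example. Usiigaci's actual pipeline relies on a trained neural network to segment cells; the sketch below (plain Python, not the authors' code) only illustrates the simplest form of the subsequent linking step: matching each cell's centroid to the nearest centroid in the following frame.

```python
# Toy sketch (not Usiigaci's actual code): once cells are segmented in each
# frame, a basic form of single-cell tracking links each cell's centroid to
# the nearest unmatched centroid in the next frame.
from math import dist

def link_frames(frame_a, frame_b, max_jump=20.0):
    """Greedily match centroids in frame_a to the nearest unmatched
    centroid in frame_b; returns a list of (index_a, index_b) links.
    Cells that moved farther than max_jump pixels stay unlinked."""
    links, taken = [], set()
    for i, ca in enumerate(frame_a):
        best, best_d = None, max_jump
        for j, cb in enumerate(frame_b):
            if j in taken:
                continue
            d = dist(ca, cb)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            taken.add(best)
    return links

# Two consecutive frames: three cells, each drifting slightly.
t0 = [(10.0, 10.0), (50.0, 12.0), (30.0, 40.0)]
t1 = [(12.0, 11.0), (48.0, 15.0), (31.0, 43.0)]
print(link_frames(t0, t1))  # each cell keeps its identity: [(0, 0), (1, 1), (2, 2)]
```

Real trackers handle cells that divide, touch, or leave the field of view; the greedy matching above is only the conceptual core.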
As gesture control represents a seamless interface between humans and machines, more and more machines, robots and devices are able to respond to gestural cues. Researchers at Fraunhofer IOF in Jena are raising human-machine interaction to a new level: the high-speed 3D measurement and sensor technology developed in the 3D-LivingLab research project (see “3D-LivingLab” box) enables them to capture and interpret even complex movements – and does so in real time. At Hannover Messe 2019, the research team is demonstrating its gesture-based human-machine interaction technology using the example of a wall made up of 150 spheres, which copies in 3D every head, arm and hand movement of a person standing in front of it. The wall of spheres effectively imitates the body movements with contact-free 3D reactions in real time, free from irritating time lags. The wall of spheres was created as part of the “3D-LivingLab” project.

Workflows are greatly simplified

The system is made up of several modules: the 3D sensor, 3D data processing and image fusion, as well as the actuator system itself, comprising 150 individual actuators. “The wall of spheres is not only a great toy, it also represents cutting-edge technology. Real-time 3D capture and interpretation of multiple gestures without tracking sensors can radically simplify workflows – from production scenarios to health and safety,” says Dr. Peter Kühmstedt, scientist and group leader at Fraunhofer IOF. The demonstrator system responds to the behavior of people, captures complex movements such as gestures and physical actions, and gives real-time feedback through a technical actuator system that converts electrical signals into movement on the wall of spheres. It is the person’s posture that controls the actuators. Specially developed algorithms enable human 3D movements to trigger control of the actuators, thus causing the spheres to move. 
“We are demonstrating very rapid measurement technology – the data is captured by a new generation of 3D sensors – very rapid low-latency processing – the data is interpreted and converted immediately – and very rapid reactions in real time. Based on the calculated results, the wall of spheres immediately mirrors the movement of the person in front of it,” says the researcher. In production environments, for example, the technology could be used to monitor a worker who is interacting with a robot and handing it parts. It could also be transferred to other application fields, such as health and safety, where it can make processes safer and more efficient. Other conceivable applications for the 3D sensor technology and the interaction components are in assembly assistance and quality control systems. They are also suitable for monitoring biometric access points.
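Fraunhofer has not published the system's internals, but the basic idea of driving 150 actuators from a captured depth image can be sketched: downsample the depth map onto the actuator grid. The 15 × 10 grid shape and the average pooling below are assumptions for illustration only.

```python
# Illustrative sketch (the Fraunhofer IOF pipeline is not public): map a 2D
# depth image onto a grid of 150 actuator extensions by average pooling over
# rectangular blocks, one block per actuator.

def depth_to_actuators(depth, rows=10, cols=15):
    """Average-pool a 2D depth image (list of lists of meters) into a
    rows x cols grid of actuator target values."""
    h, w = len(depth), len(depth[0])
    bh, bw = h // rows, w // cols  # block size per actuator
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [depth[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# A 100 x 150 synthetic depth frame: background at 2.0 m, with a "person"
# 1.0 m away in the middle of the frame.
frame = [[1.0 if 30 <= y < 70 and 50 <= x < 100 else 2.0 for x in range(150)]
         for y in range(100)]
grid = depth_to_actuators(frame)
print(grid[5][7], grid[0][0])  # 1.0 (person region), 2.0 (background)
```

The real system additionally fuses multiple sensors and interprets gestures; this sketch covers only the final step of turning a depth field into per-actuator commands.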
Many people use Alexa, Siri and other similar voice assistants on a daily basis, dipping in to access the latest news, make use of voice navigation or simply stream their favorite songs. Voice assistants are an intuitive way to interact with technology and an effective way of delivering services and imparting information. They are not just handy everyday helpers, however; they present companies and businesses with a huge opportunity to simplify human-machine interaction and offer entirely new services to their industry customers.

Focus on companies

Researchers at Fraunhofer IAIS in Sankt Augustin develop just these sorts of voice interaction systems for use in a wide variety of applications, including manufacturing and the automotive and medical sectors. While Alexa, Siri and the like are aimed at individual consumers, the research team at Fraunhofer IAIS uses the latest techniques in machine learning, question answering and knowledge graphs to address the specific needs and challenges of business. “In manufacturing, for instance, we are seeing more and more robots equipped with voice assistants, which the worker can then operate and train using voice and gestures,” says Prof. Dr. Jens Lehmann, Lead Scientist at Fraunhofer IAIS. Prof. Lehmann and his team at Fraunhofer IAIS specialize in dialog systems catering to domain-specific knowledge and trained for specific applications. At the Hannover Messe, they will be showcasing a voice assistant integrated into a VW Tiguan. Wearing a headset and virtual reality glasses, drivers will be taken on a virtual tour of Berlin while the interactive system answers questions about the surroundings such as: What’s that building on the left-hand side? What’s it known for? When was it built? Who built it? 
The system also supports supplementary questions such as “Where does the architect come from?” or “Tell me more about him!”

Domain-specific knowledge for answering complex questions

The Hannover Messe showcase is a collaboration between the Fraunhofer Cluster of Excellence Cognitive Internet Technologies (www.cit.fraunhofer.de), Volkswagen and the Fraunhofer Institute for Integrated Circuits IIS. “Knowledge related to Berlin has been collated into a knowledge graph, where each building represents a point on the graph and forms connections with other points. As a result, we can gather progressively more information and constantly expand the knowledge base. This is what allows the system to answer complex questions instead of restricting inquiries to a limited number of prescribed questions,” explains Lehmann. In a manufacturing context, this sort of knowledge graph could report on the status of machines, for example, or answer questions about components produced in the last hour. The knowledge graphs used for the trade show exhibition draw on a variety of data sources, including DBpedia (http://dbpedia.org) and OpenStreetMap. A special feature of the voice assistant is that it is also able to harness unstructured knowledge, such as text documents on museums, for instance. With these systems, you have not only the physical machine in the production hall, but also a virtual counterpart that is fed with real data. This data can be interrogated using dialog or question answering systems. “While question answering systems directly answer a single question, dialog systems support multiple interaction steps with sequences of questions and answers. A dialog system will also respond to sequences of inquiries and small talk, just like the exhibit we will have on display,” says Lehmann.

The more training data, the smarter the voice assistant

“It is the domain-specific knowledge that makes a voice assistant smart. 
The technical challenge from our side lies in developing a system that can understand users’ queries and respond appropriately using the knowledge contained in the knowledge graph,” the researcher concludes. Developing such a system calls for the application of the latest techniques in machine learning, techniques that the researchers at Fraunhofer IAIS are constantly developing and refining. The expertise they have assembled in machine learning and domain-specific knowledge puts them at the top of their field internationally. Tailored to the respective domains, the experts select the appropriate machine learning algorithms and train them using sample dialogs and question-answer pairs. The intelligence of the voice assistant grows with the amount of training data it amasses. The voice assistants developed by Fraunhofer IAIS offer their users the ultimate digital experience and are all GDPR-compliant.
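The knowledge-graph mechanism Lehmann describes can be reduced to a small sketch: facts stored as subject-predicate-object triples, with follow-up questions answered by hopping along the graph's edges. The entities, predicates, and two-hop query below are illustrative, not Fraunhofer IAIS code.

```python
# Minimal knowledge-graph sketch: each fact is a (subject, predicate, object)
# triple, and multi-step questions chain lookups along the edges.
triples = [
    ("Brandenburg Gate", "located_in", "Berlin"),
    ("Brandenburg Gate", "completed_in", "1791"),
    ("Brandenburg Gate", "architect", "Carl Gotthard Langhans"),
    ("Carl Gotthard Langhans", "born_in", "Landeshut"),
]

def query(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# "When was it built?"
print(query("Brandenburg Gate", "completed_in"))  # ['1791']

# The follow-up "Where does the architect come from?" chains two hops:
architect = query("Brandenburg Gate", "architect")[0]
print(query(architect, "born_in"))                # ['Landeshut']
```

Production systems such as those built on DBpedia store millions of triples and are queried with languages like SPARQL; the chained lookup above is the same idea at toy scale.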
The Fraunhofer Institute for Ceramic Technologies and Systems IKTS plans to showcase its high-temperature battery cerenergy – and specifically the 5 kWh model with 20 battery cells – at Energy Storage Europe 2019 in Düsseldorf in mid-March. The sodium-nickel chloride battery is primarily based on sodium chloride, one of the most cost-efficient raw materials in the world. No rare earths or other strategic resources are used. In addition to sodium chloride, only a ceramic Na-ion-conducting electrolyte made of doped aluminum oxide, as well as nickel and iron, are required. Together they create an energy storage system with an overall efficiency of > 90% and an energy density of 130 watt-hours per kilogram. The operating temperature, which reaches around 300 °C as is typical for ceramic battery solutions, is shielded from outside influences by vacuum insulation.
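The quoted figures imply some useful per-cell numbers. A quick back-of-envelope check, under the assumptions that the 5 kWh is shared evenly across the 20 cells and that the 130 Wh/kg figure applies to the cell mass:

```python
# Back-of-envelope figures implied by the article (assumptions: energy split
# evenly over the 20 cells; 130 Wh/kg applies to the cells themselves).
pack_energy_wh = 5000   # 5 kWh model
cells = 20
energy_density = 130    # Wh per kg

per_cell_wh = pack_energy_wh / cells
cell_mass_kg = pack_energy_wh / energy_density

print(per_cell_wh)               # 250.0 Wh per cell
print(round(cell_mass_kg, 1))    # 38.5 kg of cell mass for the whole pack
```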
Voodoo Manufacturing is based in the New York borough of Brooklyn and specializes in delivering 3D print jobs. The company is focused on industrial mass production, but is in competition with service providers who use conventional injection molding processes. To utilize the more than 200 3D printers on the company’s approximately 1,700 m² premises more efficiently, the business is using a UR10-model cobot from Danish market leader Universal Robots. The robot arm is mounted on a mobile base and can reach around 100 of the installed 3D printers. It is responsible for removing used printing plates from the equipment, placing them on a conveyor belt and loading the printers with new plates. Automation has allowed Voodoo Manufacturing to triple its production – not least because the UR10 also works at night, monitored by proprietary software. With an additional UR10, the company hopes to increase utilization of its printer capacity from the current level of 30-40% to around 90%, further reducing production costs. The firm’s long-term goal is to install up to 10,000 3D printers served by several cobots in order to work more cost-effectively than the injection-molding industry.
The e-mobility trend is creating a new problem: what to do with all the old batteries that still work but are unsuitable for driving due to deteriorating performance? Swedish automotive group Volvo is now taking part in a project putting retired bus batteries to use in a solar installation. Specifically, the project involves the new Viva residential complex owned by housing cooperative Riksbyggen in Göteborg, which was designed as a sustainable project. Under an energy supply plan drawn up in collaboration with energy provider Göteborg Energi and the Johanneberg Science Park, energy from the photovoltaic installations on the roofs of the apartment buildings is stored in batteries previously installed in the electric buses on line 55 in Göteborg. The installations deploy 14 used lithium-ion batteries, linked up to create a 200 kWh storage unit. They are intended to store excess electricity from the solar installation so that it can be made available at peak times or even sold. The batteries can also be used to store electricity from the national power grid.
According to the Japanese group Omron, its new TM cobots assist human employees with highly repetitive tasks such as fitting, assembling or inspecting components. Thanks to their ease of programming via a flowchart-based interface, they should also be able to handle frequent product changes. For the time being, the series comprises 12 models, which come with an arm length of 700, 900, 1100 or 1300 mm and are equipped to carry a load of between 4 kg and 14 kg. The robot arm has an integrated image processing and lighting system for scanning products from numerous angles. The software offers a range of features such as pattern and color recognition and barcode scanning. With ISO 10218-1 and ISO/TS 15066 certification, the robots meet all current safety standards for human-machine collaboration. The series also includes a model that is compatible with mobile robots from Omron’s LD line. It was only in November 2018 that the Japanese group agreed a strategic partnership with InSystems Automation, a Berlin-based company working in the field of mobile robotic systems, to develop bespoke solutions for automated material handling.
The automotive supplier Bosch, together with the Chamber of Industry and Commerce (IHK) Stuttgart and other partners from science and industry, developed the Industry 4.0 (IHK) training course and has already tested it in a pilot project. The course is aimed at skilled workers with professional experience in production or logistics and concludes with a certificate. Subdivided into five modules, it conveys an understanding of current technologies and data transmission options, as well as how logistics and supply chains function in the digital world. It focuses on technical content and working methods such as Scrum. According to Bosch and the project partners, this is the first course that directly targets skilled workers and qualifies them for the requirements of networked production. The first twelve participants from the Bosch plant in Stuttgart-Feuerbach have already successfully completed the course. The IHK plans to offer the courses throughout Germany for all companies starting in 2019.
According to managing director Stefan Studer, the union’s symbolic acceptance of the humanoid robot as a member is about “questioning the self-image of labor unions”. This is intended to make both members and businesses more aware of unresolved issues relating to digitization, in particular with regard to working with cobots. The market for collaborative robots has grown significantly in recent years. Machines equipped with artificial intelligence and sensitive sensors are already working side by side with human colleagues in many companies. Businesses hope that using such robots will allow them to cope with the skills shortage. But unions fear that in the medium term this could push people out of their jobs. And there are other sticking points to consider: what happens if something goes wrong in this new form of collaboration? Who is liable for the damages? Can robots be placed under an obligation, or might they also have rights? By admitting a robot, the Swiss union hopes to shine more light on questions of this kind.
Two-layer solar cells improve energy efficiency (Picture: NREL, a national laboratory of the U.S. Department of Energy)

At the UCLA Samueli School of Engineering in Los Angeles, materials scientists have developed a new type of thin-film solar cell that generates more energy from sunlight than conventional cells do. The element features a base consisting of a 2 μm layer of copper indium gallium selenide (CIGS). The team led by Professor Yang Yang then applied a 1 μm layer of perovskite, a cost-effective compound of lead and iodine. The two layers are connected by a nanoscale interface, also developed at UCLA, which gives the solar element a higher voltage, allowing it to generate more energy. The two layers are affixed to an approximately 2 mm-thick glass substrate. The CIGS base layer alone achieves an efficiency of around 18.7%. Together with the perovskite layer, the efficiency increases to 22.4%. The additional performance has now been confirmed by independent tests at the National Renewable Energy Laboratory (NREL) of the US Department of Energy. Professor Yang Yang expects that the efficiency of these two-layer cells can be pushed to around 30%.