  • Revenue and employees of "Construction of buildings" compani…
    NEWS   |   24/03/2019

    ▣ CATEGORY OF COMPANIES (NACE Rev. 2): Construction of buildings; Construction of residential and non-residential buildings; Development of building projects

  • Revenue and employees of "Civil engineering" companies in EU
    NEWS   |   24/03/2019

    ▣ CATEGORY OF COMPANIES (NACE Rev. 2): Civil engineering; Construction of bridges and tunnels; Construction of other civil engineering projects; Construction of other civil engineering projects n.e.c.; Construction of railways and underground railways; Construction of roads and motorways; Construction of roads and railways; Construction of utility projects; Construction of utility projects for electricity and telecommunications; Construction of utility projects for fluids; Construction of water projects

  • Machine learning tracks moving cells
    NEWS   |   18/03/2019
    Plants & Machinery  AI, Robot, Industry 4.0

    Both developing babies and elderly adults share a common characteristic: the many cells making up their bodies are always on the move. As we humans commute to work, cells migrate through the body to get their jobs done. Biologists have long struggled to quantify the movement and changing morphology of cells through time, but now, scientists at the Okinawa Institute of Science and Technology Graduate University (OIST) have devised an elegant tool to do just that. Using machine learning, the researchers designed software to analyze microscopic snapshots of migrating cells. They named the software Usiigaci, a Ryukyuan word that refers to tracing the outlines of objects, as the innovative tool detects the changing outlines of individual cells. Usiigaci, described in a paper published March 13, 2019 in SoftwareX, is now available online for anyone to use, along with a video tutorial explaining the software.

    In the womb, a baby's cells migrate to precise locations so that each arm, leg, and organ grows in its proper place. Our immune cells race through the body to mend wounds after injury. Cancerous cells metastasize by traveling through the body, spreading tumors to new tissues. To test the efficacy of new medicines, drug developers track the movement of cells before and after treatment. The Usiigaci software finds applications in all these areas of study and more.

    "This is an all-in-one solution to get us from raw images to quantitative data on cell migration," said Hsieh-Fu Tsai, first author of the study. Tsai is a graduate student and a Japan Society for the Promotion of Science (JSPS) DC1 research fellow in the OIST Micro/Bio/Nanofluidics Unit, led by Prof. Amy Shen. "Our software is at least 100 times faster than manual methods, which are currently the gold-standard for these types of experiments because computers are not yet powerful enough."

    "We're hoping this software can become quite useful for the scientific community," said Prof. Amy Shen, principal investigator of the unit and senior author of the study. "For any biological study or drug screening that requires you to track cellular responses to different stimuli, you can use this software."

    [Video caption: Usiigaci in Action. The Micro/Bio/Nanofluidics Unit has devised machine learning software to segment, track, and analyze the movement of migrating cells. Named Usiigaci, a Ryukyuan word that means "tracing," the software significantly outperforms existing programs and has many applications across biology and medicine.]

    Machine Learning Makes Usiigaci Adaptable

    In order to observe cells under the microscope, scientists often steep them in dye or tweak their genes to make them glow in eye-popping colors. But coloring cells alters their movement, which in turn skews the experimental results. Some scientists attempt to study cell migration without the help of fluorescent tags, using so-called "label-free" methods, but end up running into a different problem: label-free cells blend into the background of microscopic images, making them incredibly difficult to analyze with existing computer software. Usiigaci hops this hurdle by allowing scientists to train the software over time. Biologists act as teachers, providing the software with new images to study so that it can learn to recognize one cell from the next. A fast learner, the program quickly adapts to new sets of data and can easily track the movement of single cells, even if they're crammed together like commuters on the Tokyo metro.

    "Most software...cannot tell cells in high-density apart; basically, they're segmenting into a glob," said Tsai. "With our software, we can segment correctly even if cells are touching. We can actually do single-cell tracking throughout the entire experiment." Usiigaci is currently the fastest software capable of tracking the movement of label-free cells at single-cell resolution on a personal laptop.

    [Photo caption: Prof. Amy Shen (left) and Hsieh-Fu Tsai of the Micro/Bio/Nanofluidics Unit stand beside the microscope they use to capture images of migrating cells. As an initial step of his thesis project, Tsai designed the Usiigaci software to analyze these images and quantify the movement and changing morphology of cells through time.]

    Software Mimics the Human Brain

    The researchers designed Usiigaci to process images as if it were a simplified human brain. The strategy enables the software to trace the outlines of individual cells, monitor their movement moment to moment, and transform that information into crunchable numbers. The program is built around a machine learning architecture known as a "convolutional neural network," roughly based on how brain cells work together to process incoming information from the outside world. When our eyes capture light from the environment, they call on neurons to analyze those signals and figure out what we're looking at and where it is in space. The neurons first sketch out the scene in broad strokes, then pass the information on to the next set of cells, progressively rendering the image in more and more detail. Neural networks work similarly, except each "neuron" is a collection of code rather than a physical cell. This design grants Usiigaci its accuracy and adaptability.

    Looking forward, the researchers aim to develop neural networks that identify different components within cells, rather than just their outlines. With these tools in hand, scientists could easily assess whether a cell is healthy or diseased, young or old, derived from one genetic lineage or another. Like Usiigaci, these programs would have utility in fundamental biology, biotechnology research and beyond.
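    The article describes this pipeline only at a high level, so the following is a minimal, hypothetical sketch of the general idea behind such tools: segment the cells in each frame, then link the resulting centroids across frames into per-cell tracks. The thresholding step below merely stands in for Usiigaci's trained neural network, and every function name and parameter is illustrative rather than taken from the actual software.

        # A minimal, illustrative cell-tracking pipeline (NOT the actual
        # Usiigaci code): Otsu thresholding stands in for the trained
        # convolutional neural network that Usiigaci uses to segment
        # label-free cells.
        import numpy as np
        from scipy.spatial.distance import cdist
        from skimage import filters, measure

        def segment_frame(frame):
            """Return cell centroids for one grayscale frame."""
            mask = frame > filters.threshold_otsu(frame)
            labels = measure.label(mask)
            return [r.centroid for r in measure.regionprops(labels)]

        def link_tracks(frames, max_dist=20.0):
            """Greedily link centroids across frames into per-cell tracks."""
            tracks = [[c] for c in segment_frame(frames[0])]
            for frame in frames[1:]:
                centroids = segment_frame(frame)
                if not tracks or not centroids:
                    continue
                # Distance from each track's last position to each new centroid
                dists = cdist([t[-1] for t in tracks], centroids)
                for i, track in enumerate(tracks):
                    j = int(np.argmin(dists[i]))
                    if dists[i, j] <= max_dist:
                        track.append(centroids[j])
            return tracks  # each track: one cell's centroid positions over time

    Greedy nearest-neighbor linking of this kind mis-assigns touching cells, which is precisely the failure mode the learned segmentation in Usiigaci is designed to avoid.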

  • Imaging: A peek into lymph nodes
    NEWS   |   18/03/2019
    Medical & Healthcare  Medical Test Services

    The vast majority of cancer deaths occur due to the spread of cancer from one organ to another, which can happen either through the blood or the lymphatic system. However, it can be tricky to detect this early enough. Researchers at Tohoku University have developed a new method that would allow doctors to detect cancers in the lymph nodes while they are still small, before they travel to other parts of the body. This can greatly increase the chances of a successful treatment.

    There aren't many imaging techniques that can detect tumors in lymph nodes before they grow too large, especially in smaller nodes. Biopsies of lymph nodes are a possible option, but they can often give false negative results. So the team wanted to come up with a new method that would accurately detect the earliest stages of a cancer moving to another part of the body, using a technique called X-ray microcomputed tomography (micro-CT) (imaging supplies in the catalogue of MEDICA 2018).

    The team tested their new method on mice with breast cancer cells inserted into their lymph nodes. They injected a contrast agent at a slow, steady pace into the lymph nodes upstream of those carrying the cancer cells. As the contrast agent made its way through the lymphatic system, the researchers were able to map out its movement using micro-CT. Initially, the researchers did not observe any change in the flow of the contrast agent. However, 28 days after the cancer cells were injected into the lymph nodes, the cells had divided and grown to a point where they blocked the flow of the contrast agent, creating empty pockets in the scan that did not contain any contrast agent. By comparing the shape of the lymph node with the areas that contained the contrast agent, the researchers were able to get a clear picture of the presence of cancer cells there.

    Next, the researchers would like to home in on better contrast agents that would offer a clearer, more precise picture of how cancer cells are moving around the lymphatic system. In the future, this technique could be an effective way to detect tumors early, before they spread around the body, saving many lives and adding one more tool that doctors can turn to in their fight against cancer.
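    The article does not spell out how the comparison was quantified. A minimal, hypothetical way to express the "empty pockets" idea in code is to segment the node and the contrast-filled region from the reconstructed CT volume and report the unfilled fraction; the function name and threshold below are invented for illustration, not taken from the study.

        # Hypothetical illustration: quantify contrast "filling defects"
        # inside a segmented lymph node from a micro-CT volume.
        import numpy as np

        def unfilled_fraction(volume, node_mask, contrast_threshold):
            """Fraction of node voxels not reached by the contrast agent.

            volume             -- 3D array of CT intensities
            node_mask          -- boolean 3D array marking the lymph node
            contrast_threshold -- intensity above which a voxel counts as filled
            """
            filled = (volume > contrast_threshold) & node_mask
            return 1.0 - filled.sum() / node_mask.sum()

        # A growing metastasis that blocks lymph flow would show up as a
        # rising unfilled fraction across imaging sessions (e.g. day 0 vs 28).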

  • Using 3D models to reduce side effects of radiotherapy
    NEWS   |   15/03/2019
    Medical & Healthcare  Medical Test Services

    The debilitating side effects of radiotherapy could soon be a thing of the past thanks to a breakthrough by University of South Australia (UniSA) and Harvard University researchers. UniSA biomedical engineer Professor Benjamin Thierry is leading an international study using organ-on-a-chip technology to develop 3D models to test the effects of different levels and types of radiation. A microfluidic cell culture chip closely mimics the structure and function of small blood vessels within a disposable device the size of a glass slide, allowing researchers and clinicians to investigate the impact of radiotherapy on the body's tissues.

    To date, scientists have relied on testing radiotherapy on cells in a two-dimensional environment on a slide. Professor Thierry, from UniSA's Future Industries Institute (FII) and the ARC Centre of Excellence in Convergent Bio-Nano Science and Technology (CBNS), says the organ-on-a-chip technology could reduce the need for animal studies and irrelevant in vitro work, both of which have major limitations.

    "An important finding of the study is that endothelial cells grown in the standard 2D culture are significantly more radiosensitive than cells in the 3D vascular network. This is significant because we need to balance the effect of radiation on tumour tissues while preserving healthy ones," Prof Thierry says.

    The findings, published in Advanced Materials Technologies, will allow researchers to fully investigate how radiation impacts on blood vessels and – soon – all other sensitive organs. "The human microvasculature (blood vessel systems within organs) is particularly sensitive to radiotherapy and the model used in this study could potentially lead to more effective therapies with fewer side effects for cancer patients," Prof Thierry says.

    More than half of all cancer patients receive radiotherapy at least once in the course of their treatment. While it cures many cancers, the side effects can be brutal and sometimes lead to acute organ failure and long-term cardiovascular disease.

    Prof Thierry's team, including UniSA FII colleague Dr Chih-Tsung Yang and PhD student Zhaobin Guo, is working in close collaboration with the Royal Adelaide Hospital and Harvard University's Dana-Farber Cancer Institute with the support of the Australian National Fabrication Facility. "Better understanding the effect of radiotherapy on blood vessels within organs – and more generally on healthy tissues – is important, especially where extremely high doses and types of radiation are used," Dr Yang says.

    The researchers' next step is to develop body-on-chip models that mimic the key organs relevant to a specific cancer type.

  • Wearables: thermal sensors to manage body-focused repetitive…
    NEWS   |   15/03/2019
    Medical & Healthcare  Health Care Service

    The wrist-worn device, called Tingle, was also able to distinguish between behaviors directed toward six different locations on the head. The paper, "Thermal Sensors Improve Wrist-worn Position Tracking," provides preliminary evidence of the device's potential use in the diagnosis and management of excoriation disorder (chronic skin-picking), nail-biting, trichotillomania (chronic hair-pulling), and other body-focused repetitive behaviors (BFRBs).

    The researchers, led by Arno Klein, Ph.D., Director of Innovative Technologies, Joseph Healey Scholar, and Senior Research Scientist in the Center for the Developing Brain at the Child Mind Institute, collected data from 39 healthy adult volunteers by having them perform a series of repetitive behaviors while wearing the Tingle (find out more about Information and Communication Technology at MEDICA 2018 here). The Tingle was designed by the Institute's MATTER Lab to passively collect thermal, proximity and accelerometer data. Dr. Klein and colleagues found that the thermal sensor data improved the Tingle device's ability to accurately distinguish between a hand's positions at different locations on the head, which would be useful in detecting clinically relevant BFRBs. BFRBs are related to a variety of mental and neurological illnesses (find out more about neurological diagnosis, apparatus and instruments at MEDICA 2018 here), including autism, Tourette syndrome and Parkinson's disease.

    "Body-focused repetitive behaviors can cause significant harm and distress," said Dr. Klein. "Our findings are quite promising because they indicate that the thermal sensors (find out more about temperature sensors at COMPAMED 2018 here) in devices like the Tingle have potential uses for many different types of hand movement training, in navigation of virtual environments, and in monitoring and mitigating repetitive, compulsive behaviors like BFRBs."
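    The paper's analysis is not reproduced here, but its core claim – that adding thermal channels to motion data improves classification of hand position on the head – can be sketched with an off-the-shelf classifier. The feature layout and data below are random placeholders that only demonstrate the evaluation pattern; a real comparison would require actual Tingle recordings.

        # Illustrative sketch (hypothetical data, not the paper's dataset):
        # compare cross-validated accuracy of a head-position classifier
        # with and without thermal features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 1000
        accel = rng.normal(size=(n, 3))      # stand-in accelerometer features
        thermal = rng.normal(size=(n, 2))    # stand-in thermal/proximity features
        labels = rng.integers(0, 6, size=n)  # six head locations

        for name, X in [("accel only", accel),
                        ("accel + thermal", np.hstack([accel, thermal]))]:
            acc = cross_val_score(RandomForestClassifier(), X, labels, cv=5).mean()
            print(f"{name}: {acc:.2f}")  # random data scores at chance (~0.17)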

  • Data@Hand – optimizing processes in the most various applica…
    NEWS   |   15/03/2019
    Plants & Machinery  AI, Robot, Industry 4.0

    This is what the future could be like: a company directly incorporates a sensor unit that analyzes data and detects anomalies into each installation it produces. The installations are sold throughout the world, and as soon as they are in operation they transmit their data to a common cloud. In this way, all installations throughout the world can learn from each other what normal operation looks like. Any deviation is recognized, even if the unknown pattern has never occurred in the individual installation. "What's special about this is that we can react to operating conditions that have never arisen before and have a system that is constantly evolving. By learning on its own, it recognizes normal operating conditions and deviations," explains Dr.-Ing. Mario Aehnelt, Head of the Department "Visual Assistance Technologies" at Fraunhofer IGD in Rostock.

    Optimum algorithm incorporation for every customer

    Data@Hand is an information and data tool for humans in working processes which is aimed at process optimization and is based on the principles of machine learning and artificial intelligence. It supports the analysis of complex data volumes but leaves the specific decisions on how to react to anomalies to the professional expert. Through individually addressed questions, Data@Hand ensures the optimum incorporation of algorithms for every customer. Machine data from production can thus be evaluated more quickly, in the same way as, for example, a patient's vital signs. Analysis can take place not only on a powerful server-based platform but also on ultra-small systems directly at the machine or patient. Data@Hand can also be connected to existing AI tools and data processing platforms (MES/ERP) or be used for visual data presentation, by way of Plant@Hand3D or Health@Hand, for example. Customers can therefore work with systems they are already familiar with.

    Live data analysis at the Hanover Trade Fair

    At the Hanover Trade Fair, scientists from Fraunhofer IGD will show how real added value can be generated from raw data collection through intelligent analysis with Data@Hand and the visualization of critical conditions. In an example demonstration, the operating conditions of a compressor unit will be modified to different degrees, and the machine parameters of temperature, vibration and power consumption will be analyzed. The recognition analysis runs on the directly connected sensor unit. With these data, anomalies and new operating influences are identified in real time. As soon as the operating behavior deviates from normal, a warning is given. The collected data can be used not only to analyze the causes of problems but also to make predictions, which helps reduce maintenance costs.
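    As a concrete, hypothetical illustration of the self-learning anomaly detection described above (not Fraunhofer's actual algorithm), a sensor unit could maintain running statistics per machine parameter and flag readings that fall far outside the learned normal range:

        # Hypothetical sketch of streaming anomaly detection on sensor data.
        # Learns the "normal" range online via running mean/variance
        # (Welford's algorithm) and flags strong deviations.
        import math

        class RunningAnomalyDetector:
            def __init__(self, threshold_sigma=4.0):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0  # running sum of squared deviations
                self.threshold = threshold_sigma

            def update(self, x):
                """Feed one reading; return True if it is anomalous."""
                if self.n >= 30:  # judge only after enough normal data is seen
                    std = math.sqrt(self.m2 / (self.n - 1))
                    if std > 0 and abs(x - self.mean) / std > self.threshold:
                        return True  # anomaly: do not fold into the model
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)
                return False

        # One detector per parameter (temperature, vibration, power);
        # fleet-wide learning could merge these statistics in the cloud.
        temperature = RunningAnomalyDetector()
        for reading in [70.1, 70.4, 69.8] * 12 + [95.0]:
            if temperature.update(reading):
                print("warning: abnormal temperature reading", reading)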

  • PIX-Torque'ing Points
    PIX Transmissions Ltd   |   13/03/2019
    Industrial Goods  Transmission tools and accessories

    PIX Newsletter

  • Sophisticated 3D measurement technology permits gesture-base…
    NEWS   |   10/03/2019
    Plants & Machinery  AI, Robot, Industry 4.0

    As gesture control represents a seamless interface between humans and machines, more and more machines, robots and devices are able to respond to gestural cues. Researchers at Fraunhofer IOF in Jena are raising human-machine interaction to a new level: the high-speed 3D measurement and sensor technology developed in the 3D-LivingLab research project enables them to capture and interpret even complex movements – and does so in real time. At Hannover Messe 2019, the research team is demonstrating its gesture-based human-machine interaction technology using the example of a wall made up of 150 spheres, which copies in 3D every head, arm and hand movement of a person standing in front of it. The wall of spheres effectively imitates the body movements with contact-free 3D reactions in real time, free from irritating time lags. The wall of spheres was created as part of the "3D-LivingLab" project.

    Workflows are greatly simplified

    The system is made up of several modules: a 3D sensor, 3D data processing and image fusion, as well as the actuator system itself, comprising 150 individual actuators. "The wall of spheres is not only a great toy, it also represents cutting-edge technology. Real-time 3D capture and interpretation of multiple gestures without tracking sensors can radically simplify workflows – from production scenarios to health and safety," says Dr. Peter Kühmstedt, scientist and group leader at Fraunhofer IOF.

    The demonstrator system responds to the behavior of people, captures complex movements such as gestures and physical actions, and gives real-time feedback through a technical actuator system that converts electrical signals into movement on the wall of spheres. It is the person's posture that controls the actuators. Specially developed algorithms enable human 3D movements to trigger control of the actuators, thus causing the spheres to move. "We are demonstrating very rapid measurement technology – the data is captured by a new generation of 3D sensors – very rapid low-latency processing – the data is interpreted and converted immediately – and very rapid reactions in real time. Based on the calculation results, the wall of spheres immediately mirrors the movement of the person in front of it," says the researcher.

    In production environments, for example, the technology could be used to monitor a worker who is interacting with a robot and handing it parts. It could also be transferred to other application fields, such as health and safety, where it can make processes safer and more efficient. Other conceivable applications for the 3D sensor technology and the interaction components are in assembly assistance and quality control systems. Moreover, they are also suitable for monitoring biometric access points.
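    Fraunhofer has not published the control pipeline in this announcement, but the final mapping step can be imagined as follows: downsample a depth image of the person onto the grid of 150 actuators so that each sphere's extension follows the nearest body surface. Every name, resolution and constant below is a hypothetical stand-in for the real multi-sensor system.

        # Hypothetical sketch: map a depth image (mm) onto a 10x15 grid of
        # actuator extensions (150 spheres). The real system fuses several
        # 3D sensors and interprets gestures; this is only the naive mapping.
        import numpy as np

        GRID_ROWS, GRID_COLS = 10, 15
        MAX_EXTENSION_MM = 200.0

        def depth_to_actuators(depth, near=500.0, far=2000.0):
            """Closer body parts push their spheres further out of the wall."""
            h, w = depth.shape
            # Average depth over the image patch in front of each sphere
            patches = depth[:h - h % GRID_ROWS, :w - w % GRID_COLS]
            patches = patches.reshape(GRID_ROWS, h // GRID_ROWS,
                                      GRID_COLS, w // GRID_COLS).mean(axis=(1, 3))
            # Depth `near` -> full extension, `far` or beyond -> flush
            closeness = np.clip((far - patches) / (far - near), 0.0, 1.0)
            return closeness * MAX_EXTENSION_MM

        frame = np.full((480, 600), 2000.0)  # empty scene 2 m away
        frame[100:300, 200:400] = 800.0      # person-shaped blob at 0.8 m
        print(depth_to_actuators(frame).round())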

  • New optical imaging system finds tiny tumors
    NEWS   |   10/03/2019
    Medical & Healthcare  Medical Test Services

    Many types of cancer could be more easily treated if they were detected at an earlier stage. MIT researchers have now developed an imaging system, named "DOLPHIN," which could enable them to find tiny tumors, as small as a couple of hundred cells, deep within the body. In a new study, the researchers used their imaging system, which relies on near-infrared light, to track a 0.1-millimeter fluorescent probe through the digestive tract of a living mouse. They also showed that they can detect a signal to a tissue depth of 8 centimeters, far deeper than any existing biomedical optical imaging technique.

    The researchers hope to adapt their imaging technology for early diagnosis of ovarian and other cancers that are currently difficult to detect until late stages. "We want to be able to find cancer much earlier," says Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science at MIT, a member of the Koch Institute for Integrative Cancer Research, and the newly appointed head of MIT's Department of Biological Engineering. "Our goal is to find tiny tumors, and do so in a noninvasive way."

    Existing methods for imaging tumors all have limitations that prevent them from being useful for early cancer diagnosis. Most have a tradeoff between resolution and depth of imaging, and none of the optical imaging techniques can image deeper than about 3 centimeters into tissue. Commonly used scans such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI) can image through the whole body; however, they can't reliably identify tumors until they reach about 1 centimeter in size.

    Belcher's lab set out to develop new optical methods for cancer imaging several years ago, when it joined the Koch Institute. The researchers wanted to develop technology that could image very small groups of cells deep within tissue and do so without any kind of radioactive labeling. Near-infrared light, which has wavelengths from 900 to 1700 nanometers, is well-suited to tissue imaging because longer-wavelength light scatters less when it strikes objects, which allows it to penetrate deeper into the tissue. To take advantage of this, the researchers used an approach known as hyperspectral imaging, which enables simultaneous imaging in multiple wavelengths of light.

    The researchers tested their system with a variety of near-infrared fluorescent light-emitting probes, mainly sodium yttrium fluoride nanoparticles that have rare earth elements such as erbium, holmium, or praseodymium added through a process called doping. Depending on the choice of the doping element, each of these particles emits near-infrared fluorescent light of a different wavelength. Using algorithms that they developed, the researchers can analyze the data from the hyperspectral scan to identify the sources of fluorescent light of different wavelengths, which allows them to determine the location of a particular probe. By further analyzing light from narrower wavelength bands within the entire near-IR spectrum, the researchers can also determine the depth at which a probe is located.

    The researchers call their system "DOLPHIN," which stands for "Detection of Optically Luminescent Probes using Hyperspectral and diffuse Imaging in Near-infrared." To demonstrate the potential usefulness of this system, the researchers tracked a 0.1-millimeter-sized cluster of fluorescent nanoparticles that was swallowed and then traveled through the digestive tract of a living mouse. These probes could be modified so that they target and fluorescently label specific cancer cells.

    "In terms of practical applications, this technique would allow us to non-invasively track a 0.1-millimeter-sized fluorescently labeled tumor, which is a cluster of about a few hundred cells. To our knowledge, no one has been able to do this previously using optical imaging techniques," Bardhan says.
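    MIT's algorithms are not included in this article; the general idea of locating a probe from a hyperspectral stack can nevertheless be sketched as two steps, with all spectra, band indices and names below invented for illustration: unmix each pixel against known probe emission spectra by least squares, then estimate relative depth from how strongly tissue attenuates the shorter wavelengths compared to the longer ones.

        # Hypothetical sketch of hyperspectral probe localization (not the
        # DOLPHIN algorithm): per-pixel least-squares unmixing against known
        # emission spectra, plus a crude depth proxy from a band ratio.
        import numpy as np

        # Invented emission spectra of two probes over six wavelength bands
        SPECTRA = np.array([[0.1, 0.8, 0.9, 0.3, 0.1, 0.0],    # "erbium-like"
                            [0.0, 0.1, 0.2, 0.7, 0.9, 0.4]]).T  # "holmium-like"

        def unmix(cube):
            """cube: (bands, H, W) stack -> (probes, H, W) abundance maps."""
            bands, h, w = cube.shape
            pixels = cube.reshape(bands, -1)
            abundances, *_ = np.linalg.lstsq(SPECTRA, pixels, rcond=None)
            return abundances.reshape(SPECTRA.shape[1], h, w).clip(min=0)

        def depth_proxy(cube, short_band=1, long_band=4):
            """Long/short band ratio grows with depth, since tissue
            attenuates shorter near-IR wavelengths more strongly."""
            return cube[long_band] / (cube[short_band] + 1e-9)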