Sunday, April 28, 2019

New York wants its buildings to go green, but if steel and glass aren't banned, does it count? New York’s ‘Ban’ on Glass and Steel Skyscrapers Isn’t a Ban at All



The city may require more eco-friendly building materials, but neither glass nor steel would be prohibited.

Friday, April 26, 2019

Adidas is making a recyclable shoe from reclaimed ocean plastic





It took adidas nearly 10 years to figure out how to build a shoe that can be recycled and turned into another shoe. The solution? Switching from mixed materials to a single material, which isn't easy.

Tuesday, April 23, 2019

MIT Technology Review: Triton is the world’s most murderous malware, and it’s spreading. Hackers have crafted malware that's designed to kill people. Here's what we know about it.






Triton is the world’s most murderous malware, and it’s spreading

The rogue code can disable safety systems designed to prevent catastrophic industrial accidents. It was discovered in the Middle East, but the hackers behind it are now targeting companies in North America and other parts of the world, too.
by Martin Giles
Mar 5


[Illustration: Ariel Davis]

As an experienced cyber first responder, Julian Gutmanis had been called plenty of times before to help companies deal with the fallout from cyberattacks. But when the Australian security consultant was summoned to a petrochemical plant in Saudi Arabia in the summer of 2017, what he found made his blood run cold.

The hackers had deployed malicious software, or malware, that let them take over the plant’s safety instrumented systems. These physical controllers and their associated software are the last line of defense against life-threatening disasters. They are supposed to kick in if they detect dangerous conditions, returning processes to safe levels or shutting them down altogether by triggering things like shutoff valves and pressure-release mechanisms.
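
At its core, the job these systems do is simple to describe, even if the certified hardware behind it is not: compare live sensor readings against trip limits and force the process to a safe state when a limit is breached or a reading can't be trusted. The sketch below is a hypothetical Python illustration of that idea only; the tag names, limits, and actions are invented for the example and bear no relation to Triconex logic or any real plant.

# Hypothetical sketch of safety-instrumented-system trip logic.
# Tag names, limits, and actions are invented; real SIS controllers
# run certified logic on dedicated, redundant hardware.

from dataclasses import dataclass

@dataclass
class TripLimit:
    tag: str      # sensor identifier, e.g. a pressure transmitter
    high: float   # reading above which the process is considered unsafe

# Example limits for an imaginary reactor loop.
LIMITS = [
    TripLimit(tag="reactor_pressure_bar", high=42.0),
    TripLimit(tag="h2s_ppm", high=10.0),
]

def evaluate(readings):
    """Return the safety actions demanded by the current readings."""
    actions = []
    for limit in LIMITS:
        value = readings.get(limit.tag)
        # Fail safe: a missing reading also forces a trip.
        if value is None or value > limit.high:
            actions.append(f"TRIP: close shutoff valve / vent ({limit.tag})")
    return actions

print(evaluate({"reactor_pressure_bar": 45.3, "h2s_ppm": 2.1}))
# -> ['TRIP: close shutoff valve / vent (reactor_pressure_bar)']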

The malware made it possible to take over these systems remotely. Had the intruders disabled or tampered with them, and then used other software to make equipment at the plant malfunction, the consequences could have been catastrophic. Fortunately, a flaw in the code gave the hackers away before they could do any harm. It triggered a response from a safety system in June 2017, which brought the plant to a halt. Then in August, several more systems were tripped, causing another shutdown.



The first outage was mistakenly attributed to a mechanical glitch; after the second, the plant's owners called in investigators. The sleuths found the malware, which has since been dubbed “Triton” (or sometimes “Trisis”) for the Triconex safety controller model that it targeted, which is made by Schneider Electric, a French company.

In a worst-case scenario, the rogue code could have led to the release of toxic hydrogen sulfide gas or caused explosions, putting lives at risk both at the facility and in the surrounding area.

Gutmanis recalls that dealing with the malware at the petrochemical plant, which had been restarted after the second incident, was a nerve-racking experience. “We knew that we couldn’t rely on the integrity of the safety systems,” he says. “It was about as bad as it could get.”

In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk. Safety instrumented systems aren’t just found in petrochemical plants; they’re also the last line of defense in everything from transportation systems to water treatment facilities to nuclear power stations.

Triton’s discovery raises questions about how the hackers were able to get into these critical systems. It also comes at a time when industrial facilities are embedding connectivity in all kinds of equipment—a phenomenon known as the industrial internet of things. This connectivity lets workers remotely monitor equipment and rapidly gather data so they can make operations more efficient, but it also gives hackers more potential targets.

Those behind Triton are now on the hunt for new victims. Dragos, a firm that specializes in industrial cybersecurity, and where Gutmanis now works, says it’s seen evidence over the past year or so that the hacking group that built the malware and inserted it into the Saudi plant is using some of the same digital tradecraft to research targets in places outside the Middle East, including North America. And it’s creating new strains of the code in order to compromise a broader range of safety instrumented systems.

Red alert

News of Triton’s existence was revealed in December 2017, though the identity of the plant’s owner has been kept secret. (Gutmanis and other experts involved in the initial investigation decline to name the company because they fear doing so might dissuade future targets from sharing information about cyberattacks privately with security researchers.)
Some notable cyber-physical threats
2010 💥 Stuxnet: Developed by America’s National Security Agency, working in conjunction with Israeli intelligence, the malware was a computer worm, or code that replicates itself from computer to computer without human intervention. Most likely smuggled in on a USB stick, it targeted the programmable logic controllers that govern automated processes and caused the destruction of centrifuges used to enrich uranium at a facility in Iran.
2013 🕵️‍♂️ Havex: Havex was designed to snoop on systems controlling industrial equipment, presumably so that hackers could work out how to mount attacks on the gear. The code was a remote access Trojan, or RAT, cyber-speak for software that lets hackers take control of computers remotely. Havex targeted thousands of US, European, and Canadian businesses, especially ones in the energy and petrochemical industries.
2015 ⚡️ BlackEnergy: BlackEnergy, another Trojan, had been circulating in the criminal underworld for a while before it was adapted by Russian hackers to launch an attack in December 2015 on several Ukrainian power companies, helping trigger blackouts. The malware was used to gather intelligence about the power companies’ systems and to steal log-in credentials from employees.
2016 ⚡️ CrashOverride: Also known as Industroyer, this too was developed by Russian cyber warriors, who used it to mount an attack on part of Ukraine’s electrical grid in December 2016. The malware replicated the protocols, or communications languages, that different elements of a grid use to talk to one another. This let it do things like show that a circuit breaker is closed when it’s really open. The code was used to strike an electrical transmission substation in Kiev, blacking out part of the city for a short time.

Over the past couple of years, cybersecurity firms have been racing to deconstruct the malware—and to work out who’s behind it. Their research paints a worrying picture of a sophisticated cyberweapon built and deployed by a determined and patient hacking group whose identity has yet to be established with certainty.

The hackers appear to have been inside the petrochemical company’s corporate IT network since 2014. From there, they eventually found a way into the plant’s own network, most likely through a hole in a poorly configured digital firewall that was supposed to stop unauthorized access. They then got into an engineering workstation, either by exploiting an unpatched flaw in its Windows code or by intercepting an employee’s login credentials.

Since the workstation communicated with the plant’s safety instrumented systems, the hackers were able to learn the make and model of the systems’ hardware controllers, as well as the versions of their firmware—software that’s embedded in a device’s memory and governs how it communicates with other things.

It’s likely they next acquired an identical Schneider machine and used it to test the malware they developed. This made it possible to mimic the protocol, or set of digital rules, that the engineering workstation used to communicate with the safety systems. The hackers also found a “zero-day vulnerability”, or previously unknown bug, in the Triconex model’s firmware. This let them inject code into the safety systems’ memories that ensured they could access the controllers whenever they wanted to.

Thus, the intruders could have ordered the safety instrumented systems to disable themselves and then used other malware to trigger an unsafe situation at the plant.

The results could have been horrific. The world’s worst industrial disaster to date also involved a leak of poisonous gases. In December 1984 a Union Carbide pesticide plant in Bhopal, India, released a vast cloud of toxic fumes, killing thousands and causing severe injuries to many more. The cause that time was poor maintenance and human error. But malfunctioning and inoperable safety systems at the plant meant that its last line of defense failed.

More red alerts

There have been only a few previous examples of hackers using cyberspace to try to disrupt the physical world. They include Stuxnet, which caused hundreds of centrifuges at an Iranian nuclear plant to spin out of control and destroy themselves in 2010, and CrashOverride, which Russian hackers used in 2016 to strike at Ukraine’s power grid. (Our sidebar provides a summary of these and other notable cyber-physical attacks.)

However, not even the most pessimistic of cyber-Cassandras saw malware like Triton coming. “Targeting safety systems just seemed to be off limits morally and really hard to do technically,” explains Joe Slowik, a former information warfare officer in the US Navy, who also works at Dragos.

Other experts were also shocked when they saw news of the killer code. “Even with Stuxnet and other malware, there was never a blatant, flat-out intent to hurt people,” says Bradford Hegrat, a consultant at Accenture who specializes in industrial cybersecurity.

[Illustration: Ariel Davis]

It’s almost certainly no coincidence that the malware appeared just as hackers from countries like Russia, Iran, and North Korea stepped up their probing of “critical infrastructure” sectors vital to the smooth running of modern economies, such as oil and gas companies, electrical utilities, and transport networks.

In a speech last year, Dan Coats, the US director of national intelligence, warned that the danger of a crippling cyberattack on critical American infrastructure was growing. He drew a parallel with the increased cyber chatter US intelligence agencies detected among terrorist groups before the World Trade Center attack in 2001. “Here we are nearly two decades later, and I’m here to say the warning lights are blinking red again,” said Coats. “Today, the digital infrastructure that serves this country is literally under attack.”

At first, Triton was widely thought to be the work of Iran, given that it and Saudi Arabia are archenemies. But cyber-whodunnits are rarely straightforward. In a report published last October, FireEye, a cybersecurity firm that was called in at the very beginning of the Triton investigation, fingered a different culprit: Russia.

The hackers behind Triton had tested elements of the code used during the intrusion to make it harder for antivirus programs to detect. FireEye’s researchers found a digital file they had left behind on the petrochemical company’s network, and they were then able to track down other files from the same test bed. These contained several names in Cyrillic characters, as well as an IP address that had been used to launch operations linked to the malware.

That address was registered to the Central Scientific Research Institute of Chemistry and Mechanics in Moscow, a government-owned organization with divisions that focus on critical infrastructure and industrial safety. FireEye also said it had found evidence that pointed to the involvement of a professor at the institute, though it didn’t name the person. Nevertheless, the report noted that FireEye hadn’t found specific evidence proving definitively that the institute had developed Triton.

Researchers are still digging into the malware’s origins, so more theories about who’s behind it may yet emerge. Gutmanis, meanwhile, is keen to help companies learn important lessons from his experience at the Saudi plant. In a presentation at the S4X19 industrial security conference in January, he outlined a number of them. They included the fact that the victim of the Triton attack had ignored multiple antivirus alarms triggered by the malware, and that it had failed to spot some unusual traffic across its networks. Workers at the plant had also left physical keys that control settings on Triconex systems in a position that allowed the machines’ software to be accessed remotely.
Triton: a timeline
2014: Hackers gain access to the Saudi plant’s network
June 2017: First plant shutdown
August 2017: Second plant shutdown
December 2017: Cyberattack made public
October 2018: FireEye says Triton was most likely built in a Russian lab
January 2019: More details emerge of the Triton incident response

If that makes the Saudi business sound like a security basket case, Gutmanis says it isn’t. “I’ve been into a lot of plants in the US that were nowhere near as mature [in their approach to cybersecurity] as this organization was,” he explains.

Other experts note that Triton shows government hackers are now willing to go after even relatively obscure and hard-to-crack targets in industrial facilities. Safety instrumented systems are highly tailored to safeguard different kinds of processes, so crafting malware to control them involves a great deal of time and painstaking effort. Schneider Electric’s Triconex controller, for instance, comes in dozens of different models, and each of these could be loaded with different versions of firmware.

That hackers went to such great lengths to develop Triton has been a wake-up call for Schneider and other makers of safety instrumented systems—companies like Emerson in the US and Yokogawa in Japan. Schneider has drawn praise for publicly sharing details of how the hackers targeted its Triconex model at the Saudi plant, including highlighting the zero-day bug that has since been patched. But during his January presentation, Gutmanis criticized the firm for failing to communicate enough with investigators in the immediate aftermath of the attack.

Schneider responded by saying it had cooperated fully with the company whose plant was targeted, as well as with the US Department of Homeland Security and other agencies involved in investigating Triton. It has hired more people since the event to help it respond to future incidents, and has also beefed up the security of the firmware and protocols used in its devices.

Andrew Kling, a Schneider executive, says an important lesson from Triton’s discovery is that industrial companies and equipment manufacturers need to focus even more on areas that may seem like highly unlikely targets for hackers but could cause disaster if compromised. These include things like software applications that are rarely used and older protocols that govern machine-to-machine communication. “You may think nobody’s ever going to bother breaking [an] obscure protocol that’s not even documented,” Kling says, “but you need to ask, what are the consequences if they do?”

[Illustration: Ariel Davis]

An analog future?

Over the past decade or so, companies have been adding internet connectivity and sensors to all kinds of industrial equipment. The data captured is being used for everything from predictive maintenance—which means using machine-learning models to better anticipate when equipment needs servicing—to fine-tuning production processes. There’s also been a big push to control processes remotely through things like smartphones and tablets.
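
As a rough illustration of the predictive-maintenance idea, the sketch below flags a machine for servicing when recent sensor readings drift away from a healthy baseline. The vibration figures, threshold, and function name are invented for the example; real deployments use trained machine-learning models rather than this simple statistical stand-in.

# Minimal predictive-maintenance sketch: flag equipment whose recent
# readings drift well outside a healthy baseline. Data and threshold
# are invented; production systems train ML models on real telemetry.

from statistics import mean, stdev

def needs_service(baseline, recent, k=3.0):
    """True if recent readings sit more than k baseline standard
    deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > k * sigma

# Imaginary bearing-vibration readings (mm/s RMS).
healthy = [2.0, 2.1, 1.9, 2.2, 2.0, 2.1, 1.95, 2.05]
latest = [2.8, 3.0, 3.1, 2.9]

print(needs_service(healthy, latest))  # True -> schedule maintenance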

All this can make businesses much more efficient and productive, which explains why they are expected to spend around $42 billion this year on industrial internet gear such as smart sensors and automated control systems, according to the ARC Group, which tracks the market. But the risks are also clear: the more connected equipment there is, the more targets hackers have to aim at.

To keep attackers out, industrial companies typically rely on a strategy known as “defense in depth.” This means creating multiple layers of security, starting with firewalls to separate corporate networks from the internet. Other layers are intended to prevent hackers who do get in from accessing plant networks and then industrial control systems.

These defenses also include things like antivirus tools to spot malware and, increasingly, artificial-intelligence software that tries to spot anomalous behavior inside IT systems. Then, as the ultimate backstop, there are the safety instrumented systems and physical fail-safes. The most critical systems typically have multiple physical backups to guard against the failure of any one element.
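
One way to picture that anomaly-spotting layer is a tool that flags plant-network connections never seen during a baseline period, the kind of unusual traffic the Saudi victim reportedly failed to notice. The toy sketch below assumes invented IP addresses and a hand-built baseline; commercial products model far richer behavior than simple source-destination pairs.

# Toy sketch of network anomaly detection: report connections to plant
# equipment from hosts never observed during a baseline period.
# Addresses and the baseline are invented for this example.

BASELINE = {
    ("10.1.1.20", "10.2.0.5"),   # engineering workstation -> safety controller
    ("10.1.1.21", "10.2.0.6"),   # data historian -> PLC
}

def unusual(connections):
    """Return source/destination pairs not seen during the baseline."""
    return [pair for pair in connections if pair not in BASELINE]

observed = [
    ("10.1.1.20", "10.2.0.5"),   # normal traffic
    ("192.0.2.44", "10.2.0.5"),  # unknown host reaching the safety controller
]
print(unusual(observed))  # [('192.0.2.44', '10.2.0.5')]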

The strategy has proved robust. But the rise of nation-state hackers with the time, money, and motivation to target critical infrastructure, as well as the increasing use of internet-connected systems, means the past may well not be a reliable guide to the future.

Russia, in particular, has shown that it’s willing to weaponize software and deploy it against physical targets in Ukraine, which it has used as a testing ground for its cyber arms kit. And Triton’s deployment in Saudi Arabia shows that determined hackers will spend years of prodding and probing to find ways to drill through all those defensive layers.

Fortunately, the Saudi plant’s attackers were intercepted, and we now know a great deal more about how they worked. But it’s a sobering reminder that, just like other developers, hackers make mistakes too. What if the bug they inadvertently introduced, instead of triggering a safe shutdown, had disabled the plant’s safety systems just when a human error or other mistake had caused one of the critical processes in the plant to go haywire? The result could have been a catastrophe even if the hackers hadn’t intended to cause it.

Experts at places like the US’s Idaho National Laboratory are urging companies to revisit all their operations in the light of Triton and other cyber-physical threats, and to radically reduce, or eliminate, the digital pathways hackers could use to get to critical processes.

Businesses may chafe at the costs of doing that, but Triton is a reminder that the risks are increasing. Gutmanis thinks more attacks using the world’s most murderous malware are all but inevitable. “While this was the first,” he says, “I’d be surprised if it turns out to be the last.”









Saturday, April 13, 2019

Medical Frontiers | NHK WORLD-JAPAN On Demand: Gums, Germs and Healing... "After Forty...What?"


Prevention to cure! The groundbreaking world of Japanese medical technology and healthcare. From food and exercise to the latest treatments.








Steve McMullen posted to Vintage Weird

From a 1934 pamphlet titled "After Forty...What?" which discussed tooth decay in middle age.

Wednesday, April 10, 2019

Air Conditioning Caused the Fire That Claimed Brazil's National Museum



According to the investigation, funding problems hampered the installation of fire protection devices such as hoses, water sprinklers, alarms, and fire doors.

Tuesday, April 9, 2019

Did Monsanto influence the safety reports on its Roundup herbicide?



For years, Monsanto’s scientists worked closely with outside researchers on studies that concluded the company’s Roundup herbicide was safe.
But that collaboration has now become one of the biggest liabilities facing Roundup and its new owner, Bayer, which is fighting a growing number of lawsuits alleging that the world’s most widely used herbicide causes cancer.
In a series of high-stakes suits against Bayer, plaintiffs’ lawyers have put Monsanto’s ties to the scientific community front and center. Since Bayer acquired Monsanto last June, two California juries have sided with plaintiffs who blame Roundup for their lymphoma, and Bayer’s share price has fallen roughly 35% since the first verdict.
In both cases, the plaintiffs’ lawyers argued that Monsanto influenced outside research on the safety of Roundup’s active ingredient. Monsanto emails obtained by the lawyers show that outside scientists let the company’s scientists review drafts of their studies, and that Monsanto’s scientists suggested changes.