What’s With The Numbers?

From The Dispatch:

As of Sunday night, 1,486,757 cases of COVID-19 have been reported in the United States (an increase of 18,961 from yesterday) and 89,562 deaths have been attributed to the virus (an increase of 808 from yesterday), according to the Johns Hopkins University COVID-19 Dashboard, leading to a mortality rate among confirmed cases of 6 percent (the true mortality rate is likely lower, but it’s impossible to determine precisely due to incomplete testing regimens). Of 11,499,203 coronavirus tests conducted in the United States (422,024 conducted since yesterday), 12.9 percent have come back positive.

I noticed last week that daily deaths seemed to be curving generally downwards, having dropped from 2,000+ to 1,100 to 700, but then curving back up towards the end of the week (an anomaly that I think is explained more by methods of counting and aggregation, and by when deaths get reported, than by actual date of death).

But the numbers I’m thinking about are cases of COVID-19 and how all of this is reported.

For example, The Dispatch always gives us the mortality rate of 6% with the caveat that it might be lower.

But never the percentage of “confirmed” cases against the total population, which, as I calculate it, is 0.4%.

Or the overall fatality rate of deaths divided by population, which, as I calculate it, would be 0.02%.

And that’s assuming all those deaths should be attributed to the coronavirus. Colorado, for instance, dropped a number of COVID-19 deaths from its count after attributing them to other causes, so there is clearly still some variation in how coronavirus deaths are actually assessed.

My point being: the fact that “confirmed” cases (not always the result of testing, but sometimes diagnosis by symptoms) amount to 0.4% of the population seems like a relevant number, but it never seems to be put that way. The same goes for the overall fatality rate across the entire population being 0.02%.
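For anyone who wants to check the arithmetic, the rates above follow directly from the dashboard figures. A quick sketch in Python; the population figure (roughly 328 million) is my own round number, not something from The Dispatch:

```python
# Figures from the Johns Hopkins dashboard as quoted above.
cases = 1_486_757
deaths = 89_562
tests = 11_499_203

# Assumed US population (roughly 328 million in 2020) -- my number, not the article's.
population = 328_000_000

case_fatality = deaths / cases     # deaths among confirmed cases
cases_vs_pop = cases / population  # confirmed cases vs. population
deaths_vs_pop = deaths / population  # deaths vs. population
positivity = cases / tests         # share of tests coming back positive

print(f"case fatality rate:  {case_fatality:.1%}")   # ~6.0%
print(f"cases / population:  {cases_vs_pop:.2%}")    # ~0.45%
print(f"deaths / population: {deaths_vs_pop:.3%}")   # ~0.027%
print(f"test positivity:     {positivity:.1%}")      # ~12.9%
```

The 0.4% and 0.02% figures I use above are these same numbers, just rounded more aggressively.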

The news gives us all sorts of comparisons to help people think about numbers when, say, reporting on the national debt or a drop in the stock market or some other event or issue that involves complicated numbers. I’ve heard the national debt measured in dollar bills wrapped around the earth or reaching to the moon, or contrasted with the number of stars in the galaxy, and so on and so forth.

There doesn’t seem to be a similar urge to contextualize the coronavirus numbers. There has also been little discussion of how these numbers are arrived at. Are cases all the result of tests, or assessed by symptoms, or a mix of both? Is it the same from state to state or country to country? It seems clear coronavirus deaths are not being assessed the same way state to state.

So when we talk about surges or spikes, are we talking about real changes, or changes in how numbers are counted, or when data is recompiled, or something else?

From the Johns Hopkins dashboard to official state numbers, it feels to me as if everything is being presented as much more concrete and standardized and, frankly, accurate than it really is.

Just a Monday morning observation. Hope everyone is having a great (and safe) day!

Hydrogen for Energy? Splitting Water Molecule on the Cheap

FYI

Water-splitting module a source of perpetual energy

by Mike Williams

A schematic and electron microscope cross-section show the structure of an integrated, solar-powered catalyst to split water into hydrogen fuel and oxygen. The module developed at Rice University can be immersed in water directly to produce fuel when exposed to sunlight. Credit: Jia Liang/Rice University

Rice University researchers have created an efficient, low-cost device that splits water to produce hydrogen fuel.

The platform developed by the Brown School of Engineering lab of Rice materials scientist Jun Lou integrates catalytic electrodes and perovskite solar cells that, when triggered by sunlight, produce electricity. The current flows to the catalysts that turn water into hydrogen and oxygen, with a sunlight-to-hydrogen efficiency as high as 6.7%.
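For context on the 6.7% figure: solar-to-hydrogen (STH) efficiency is conventionally defined as the chemical power stored in hydrogen divided by the incident solar power, usually computed as 1.23 V (the thermodynamic water-splitting potential) times the operating photocurrent density, over the incident irradiance. A sketch using that textbook definition; the 1-sun irradiance and the example current density are standard assumptions, not numbers from the Rice paper:

```python
def sth_efficiency(current_density_ma_cm2, irradiance_mw_cm2=100.0,
                   faradaic_efficiency=1.0):
    """Solar-to-hydrogen efficiency of a water-splitting cell.

    1.23 V is the thermodynamic potential for water splitting; the
    100 mW/cm^2 default is the standard 1-sun irradiance. Both are
    textbook values, not figures from the paper.
    """
    return 1.23 * current_density_ma_cm2 * faradaic_efficiency / irradiance_mw_cm2

# A cell operating at ~5.45 mA/cm^2 under 1 sun works out to roughly
# the 6.7% quoted for the Rice module.
print(f"{sth_efficiency(5.45):.1%}")
```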

This sort of catalysis isn’t new, but the lab packaged a perovskite layer and the electrodes into a single module that, when dropped into water and placed in sunlight, produces hydrogen with no further input.

The platform introduced by Lou, lead author and Rice postdoctoral fellow Jia Liang and their colleagues in the American Chemical Society journal ACS Nano is a self-sustaining producer of fuel that, they say, should be simple to produce in bulk.

“The concept is broadly similar to an artificial leaf,” Lou said. “What we have is an integrated module that turns sunlight into electricity that drives an electrochemical reaction. It utilizes water and sunlight to get chemical fuels.”

Perovskites are crystals with cubelike lattices that are known to harvest light. The most efficient perovskite solar cells produced so far achieve an efficiency above 25%, but the materials are expensive and tend to be stressed by light, humidity and heat.

“Jia has replaced the more expensive components, like platinum, in perovskite solar cells with alternatives like carbon,” Lou said. “That lowers the entry barrier for commercial adoption. Integrated devices like this are promising because they create a system that is sustainable. This does not require any external power to keep the module running.”

Liang said the key component may not be the perovskite but the polymer that encapsulates it, protecting the module and allowing it to be immersed for long periods. “Others have developed catalytic systems that connect the solar cell outside the water to immersed electrodes with a wire,” he said. “We simplify the system by encapsulating the perovskite layer with a Surlyn (polymer) film.”

The patterned film allows sunlight to reach the solar cell while protecting it and serves as an insulator between the cells and the electrodes, Liang said.

“With a clever system design, you can potentially make a self-sustaining loop,” Lou said. “Even when there’s no sunlight, you can use stored energy in the form of chemical fuel. You can put the hydrogen and oxygen products in separate tanks and incorporate another module like a fuel cell to turn those fuels back into electricity.”

The researchers said they will continue to improve the encapsulation technique as well as the solar cells themselves to raise the efficiency of the modules.

Partial Shutdown – “unintended” security consequence (from WaPo)

Dozens of government websites have been rendered insecure or inactive.

Some NASA, Justice Department and other government agency websites were insecure or not working as of today because important security certificates had expired, according to a report from the Internet security company Netcraft. With so many federal employees out, the agencies probably do not have the IT resources to renew the certificates.
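How does a website simply “expire”? TLS certificates carry a fixed notAfter date, and unless someone renews them, browsers start refusing connections the moment it passes. A minimal sketch of the freshness check, using the date format Python’s ssl module reports for peer certificates (the example date is made up):

```python
from datetime import datetime

# Format Python's ssl module uses for a certificate's notAfter field,
# e.g. "Jan  5 12:00:00 2019 GMT".
CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def is_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp has passed."""
    expiry = datetime.strptime(not_after, CERT_DATE_FMT)
    if now is None:
        now = datetime.utcnow()
    return now > expiry

# A certificate that lapsed mid-shutdown fails the check (hypothetical date):
print(is_expired("Jan  5 12:00:00 2019 GMT", now=datetime(2019, 1, 20)))  # True
```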

Check out the link.

Madness in the Method?

https://www.cnet.com/roadshow/news/uber-orders-24000-volvos-for-self-driving-program/?ftag=CAD13782fc&bhid=100000000000000000000000100284611

Uber has ordered 24,000 self-driving Volvo SUVs.

Forget that self-driving without a human monitor is not legal in most jurisdictions. Assume Uber can rapidly obtain local approval for self-driving vehicles. Assume it can cut its labor costs and sidestep its pending fight over whether its drivers are contractors or employees. Assume that by developing its own software controls for these Volvos it can customize successfully to locale and traffic patterns.

What I see is this:  Uber is banking its future on an asset base that will be pretty much worthless in 3-5 years.

I see that as a billion dollars blown every three years.  I see that as Uber having to build and staff and manage its own expert maintenance yards because it is using proprietary software, or having to contract that out at a premium.
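A quick sanity check of that billion-dollar figure. The per-vehicle price is my assumption (a Volvo XC90 listed around $50,000 at the time, before any self-driving hardware); the article gives only the size of the order:

```python
fleet_size = 24_000
unit_cost = 50_000     # assumed price per XC90, before self-driving hardware
useful_life_years = 4  # midpoint of the 3-5 year range above

fleet_cost = fleet_size * unit_cost
annual_depreciation = fleet_cost / useful_life_years

print(f"fleet cost:         ${fleet_cost:,}")              # $1,200,000,000
print(f"straight-line/year: ${annual_depreciation:,.0f}")  # $300,000,000
```

So even before the custom software and the maintenance yards, the fleet alone is on the order of a billion dollars per replacement cycle.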

It might be a workable model, but it is a HUGE gamble.  Yes or No?

Copied Right: How astronomers identified the first visitor from another solar system

The Economist explains
How astronomers identified the first visitor from another solar system

Neither bird, nor plane, this is A/2017 U1

Nov 3rd 2017 | by A.B.

ON October 19th Rob Weryk of the University of Hawaii saw something rather strange. In pictures produced by Pan-STARRS 1, a telescope on Haleakala, he identified an unusually fast-moving, faint object that he concluded could not have originated in the solar system. It was travelling at more than 25km per second. That is too fast for it to have a closed, elliptical orbit around the Sun. Nor could its velocity have been the result of the extra gravitational kick provided by an encounter with a planet, since it arrived from well above the ecliptic plane near which all the Sun’s planets orbit. Indeed, after swinging around the Sun, it passed far below Earth before speeding back above the ecliptic plane.
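The “too fast for a closed orbit” test is a comparison against the Sun’s escape velocity, sqrt(2GM/r), which depends on distance: about 42km per second at Earth’s distance, falling toward zero far away. The quoted 25km per second is best read as the object’s speed far from the Sun, where any leftover speed implies an open, hyperbolic path. A sketch with textbook constants (none of these numbers come from the article):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def solar_escape_velocity(r_au):
    """Escape velocity from the Sun at r_au astronomical units, in km/s."""
    return math.sqrt(2 * G * M_SUN / (r_au * AU)) / 1000.0

def is_unbound(speed_km_s, r_au):
    """True if an object at r_au moving at speed_km_s cannot be on a
    closed solar orbit."""
    return speed_km_s > solar_escape_velocity(r_au)

# At 1 AU the escape velocity is about 42 km/s; far from the Sun it
# approaches zero, so any residual interstellar speed means "unbound".
print(round(solar_escape_velocity(1.0), 1))
```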

Observations from other telescopes have now confirmed that Dr Weryk’s object is the first extrasolar object to be spied by astronomers. The object was originally classified as a comet and thus named C/2017 U1 (the “C” stands for comet). But it lacked the tail of gas and dust produced when these icy rocks fly close to the Sun. Furthermore, an analysis of the sunlight it reflected suggested that the surface is mostly rock. So it has now been classified as an asteroid, A/2017 U1, which, judging from its brightness, is about 400 metres wide.

This is puzzling. Comets are formed on the cold periphery of distant solar systems. Asteroids reside within such systems’ interiors, where any comet-like volatiles will have been driven off by the heat of their parent stars. Models of planet formation suggest that interstellar objects such as A/2017 U1 are more usually comets, as they can be more easily dislodged from their orbits than asteroids.

One explanation is that over many millennia cosmic rays have transformed the icy, volatile chemicals that would be expected to stream off a comet into more stable compounds. Another is that the Sun is not the first star A/2017 U1 has chanced upon, and its volatile materials were boiled off by previous stellar encounters. Or it could indeed be that the object was rocky to begin with, perhaps once orbiting its parent star in an equivalent of our solar system’s asteroid belt, before its ejection by an encounter with a Jupiter-like planet.

Why, then, has nothing like A/2017 U1 been seen before? Those planet-formation theories suggest such interstellar objects should be a reasonably common sight. Perhaps the theories are wrong. Or perhaps these interstellar visitors have been overlooked in the past, and A/2017 U1 will now inspire a spate of such sightings in future.

Sadly for astronomers, A/2017 U1 may not be visible long enough for these questions to be resolved decisively. It is now charging out of the solar system towards the constellation of Pegasus at 44km per second. Small uncertainties in the calculation of its trajectory may mean that where exactly it came from and where it is heading will remain a mystery.

Quantum Future – copied right, 2017

Personal note – the story I “copied right” below is of special interest to me. Some years ago, at dinner with my friend Fred Moore, now a retired sub-atomic physics specialist and professor at UT, he described to me a lab experiment his team had successfully completed whereby a signal was sent instantaneously using the property of quanta that they are “paired”; thus it was a signal that need not be encrypted to be unintelligible in transit because no actual particle carried the signal from point A to point B – it just appeared by pairing. This was mind boggling to me then, and Fred went on to explain that their work was turned over to the feds, DARPA, I think, for investigation for military/security use. Buried in this article is the news that China is using that very technological breakthrough in a satellite that can receive and transmit these “global, unhackable” signals. In a sea of otherwise good news, I hope to God DARPA or NASA have done this too.

Quantum leaps
The strangeness of the quantum realm opens up exciting new technological possibilities
The Economist Mar 11th 2017

A BATHING cap that can watch individual neurons, allowing others to monitor the wearer’s mind. A sensor that can spot hidden nuclear submarines. A computer that can discover new drugs, revolutionise securities trading and design new materials. A global network of communication links whose security is underwritten by unbreakable physical laws. Such—and more—is the promise of quantum.

All this potential arises from improvements in scientists’ ability to trap, poke and prod single atoms and wispy particles of light called photons. Today’s computer chips get cheaper and faster as their features get smaller, but quantum mechanics says that at tiny enough scales, particles sail through solids, short-circuiting the chip’s innards. Quantum technologies come at the problem from the other direction. Rather than scale devices down, quantum technologies employ the unusual behaviours of single atoms and particles and scale them up. Like computerisation before it, this unlocks a world of possibilities, with applications in nearly every existing industry—and the potential to spark entirely new ones.

Quantum mechanics—a theory of the behaviour at the atomic level put together in the early 20th century—has a well-earned reputation for weirdness. That is because the world as humanity sees it is not, in fact, how the world works. Quantum mechanics replaced wholesale the centuries-old notion of a clockwork, deterministic universe with a reality that deals in probabilities rather than certainties—one where the very act of measurement affects what is measured.

Along with that upheaval came a few truly mind-bending implications, such as the fact that particles are fundamentally neither here nor there but, until pinned down, both here and there at the same time: they are in a “superposition” of here-there-ness. The theory also suggested that particles can be spookily linked: do something to one and the change is felt instantaneously by the other, even across vast reaches of space. This “entanglement” confounded even the theory’s originators.

It is exactly these effects that show such promise now: the techniques that were refined in a bid to learn more about the quantum world are now being harnessed to put it to good use. Gizmos that exploit superposition and entanglement can vastly outperform existing ones—and accomplish things once thought to be impossible.

Improving atomic clocks by incorporating entanglement, for example, makes them more accurate than those used today in satellite positioning. That could improve navigational precision by orders of magnitude, which would make self-driving cars safer and more reliable. And because the strength of the local gravitational field affects the flow of time (according to general relativity, another immensely successful but counter-intuitive theory), such clocks would also be able to measure tiny variations in gravity. That could be used to spot underground pipes without having to dig up the road, or track submarines far below the waves.

Other aspects of quantum theory permit messaging without worries about eavesdroppers. Signals encoded using either superposed or entangled particles cannot be intercepted, duplicated and passed on. That has obvious appeal to companies and governments the world over. China has already launched a satellite that can receive and reroute such signals; a global, unhackable network could eventually follow.

The advantageous interplay between odd quantum effects reaches its zenith in quantum computers. Rather than the 0s and 1s of standard computing, a quantum computer’s bits are in superpositions of both, and each “qubit” is entangled with every other. Using algorithms that recast problems in quantum-amenable forms, such computers will be able to chomp their way through calculations that would take today’s best supercomputers millennia. Even as high-security quantum networks are being developed, a countervailing worry is that quantum computers will eventually render obsolete today’s cryptographic techniques, which are based on hard mathematical problems.
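Superposition and entanglement can be made concrete in a few lines of plain Python (no quantum library, and obviously no quantum speedup): a two-qubit state is just four complex amplitudes, and a Hadamard gate followed by a CNOT turns |00> into the Bell state, in which the two qubits are perfectly correlated.

```python
import math

# Two-qubit state as amplitudes over the basis |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_on_first(s):
    """Apply H to the first qubit: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """Flip the second qubit where the first is 1: swap |10> <-> |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot(hadamard_on_first(state))
# Amplitudes ~[0.707, 0, 0, 0.707]: a measurement yields 00 or 11 with
# equal probability, never 01 or 10 -- the "spooky" correlation.
print([round(a, 3) for a in bell])
```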

Long before that happens, however, smaller quantum computers will make other contributions in industries from energy and logistics to drug design and finance. Even simple quantum computers should be able to tackle classes of problems that choke conventional machines, such as optimising trading strategies or plucking promising drug candidates from scientific literature. Google said last week that such machines are only five years from commercial exploitability. This week IBM, which already runs a publicly accessible, rudimentary quantum computer, announced expansion plans. As our Technology Quarterly in this issue explains, big tech firms and startups alike are developing software to exploit these devices’ curious abilities. A new ecosystem of middlemen is emerging to match new hardware to industries that might benefit.

The solace of quantum

This landscape has much in common with the state of the internet in the early 1990s: a largely laboratory-based affair that had occupied scientists for decades, but in which industry was starting to see broader potential. Blue-chip firms are buying into it, or developing their own research efforts.

Startups are multiplying. Governments are investing “strategically”, having paid for the underlying research for many years—a reminder that there are some goods, such as blue-sky scientific work, that markets cannot be relied upon to provide.

Fortunately for quantum technologists, the remaining challenges are mostly engineering ones, rather than scientific. And today’s quantum-enhanced gizmos are just the beginning. What is most exciting about quantum technology is its as yet untapped potential. Experts at the frontier of any transformative technology have a spotty record of foreseeing many of the uses it will find; Thomas Edison thought his phonograph’s strength would lie in elocution lessons. For much of the 20th century “quantum” has, in the popular consciousness, simply signified “weird”. In the 21st, it will come to mean “better”.

Carbon capture and storage – Turning air into stone 6/11/16

How to keep waste carbon dioxide in the ground
Jun 11th 2016 | From the print edition The Economist |  They probably would have given permission to reprint, right?

THIS year the world’s power stations, farms, cars and the like will generate the equivalent of nearly 37 billion tonnes of waste carbon dioxide. All of it will be dumped into the atmosphere, where it will trap infra-red radiation and warm the planet. Earth is already about 0.85°C warmer than last century’s average temperature. Thanks to the combined influence of greenhouse-gas emissions and El Niño, a heat-releasing oceanic phenomenon, 2016 looks set to be the warmest year on record, and by a long way.

It would be better, then, to find some method of disposing of CO2. One idea, carbon capture and storage (CCS), involves collecting the gas from power stations and factories and burying it underground where it can do no harm. But CCS is expensive and mostly untried. One worry is whether the buried gas will stay put. Even small fissures in the rocks that confine it could let it leak out over the course of time, undoing much of the benefit. And even if cracks are not there to begin with, the very drilling necessary to bury the gas might create them.

A paper just published in Science offers a possible solution. By burying CO2 in the right sort of rock, a team of alchemists led by Juerg Matter, a geologist at Southampton University, in Britain, was able to transmute it into stone. Specifically, the researchers turned it into carbonate minerals such as calcite and magnesite. Since these minerals are stable, the carbon they contain should stay locked away indefinitely.

Dr Matter’s project, called CarbFix, is based in Iceland, a country well-endowed with both environmentalism and basalt. That last, a volcanic rock, is vital to the process, for it is full of elements which will readily react with carbon dioxide. Indeed, this is just what happens in nature. Over geological timescales (ie, millions of years) carbon dioxide is removed from the air by exactly this sort of weathering. Dr Matter’s scheme, which has been running since 2009, simply speeds things up.

Between January and March 2012 he and his team worked at the Hellisheidi geothermal power station, near Reykjavik. Despite its green reputation, geothermal energy—which uses hot groundwater to drive steam turbines—is not entirely emissions-free. Underground gases, especially CO2 and hydrogen sulphide (H2S), often hitch a ride to the surface, too. The H2S, a noxious pollutant, must be scrubbed from the power-station exhaust before it is released, and the researchers worked with the remainder, almost pure carbon dioxide.

They collected 175 tonnes of it, mixed it with a mildly radioactive tracker chemical, dissolved the mixture in water and pumped it into a layer of basalt half a kilometre below the surface. They then kept an eye on what was happening via a series of monitoring wells. In the event, it took a bit less than two years for 95% of the injected CO2 to be mineralised.

They followed this success by burying unscrubbed exhaust gas. After a few teething troubles, that worked too. The H2S reacted with iron in the basalt to make pyrites, so if exhaust gas were sequestered routinely, scrubbing might not be needed. This was enough to persuade Reykjavik Energy, the power station’s owners, to run a larger test that is going on at the moment and is burying nearly 10,000 tonnes of CO2 and around 7,300 tonnes of H2S.
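The mineralisation is ordinary stoichiometry: a mole of CO2 (about 44g) that reacts with calcium ends up as a mole of calcite, CaCO3 (about 100g). A rough sketch of what the pilot injection could have locked up, assuming for illustration that everything mineralises as calcite (in reality magnesite and other carbonates form as well):

```python
CO2_MOLAR_MASS = 44.01       # g/mol
CALCITE_MOLAR_MASS = 100.09  # g/mol, CaCO3

def calcite_from_co2(co2_tonnes, mineralised_fraction=0.95):
    """Tonnes of calcite formed if the given fraction of injected CO2
    mineralises as CaCO3 (1:1 molar ratio)."""
    moles = co2_tonnes * 1e6 / CO2_MOLAR_MASS  # tonnes -> grams -> moles
    return moles * mineralised_fraction * CALCITE_MOLAR_MASS / 1e6

# The 175-tonne pilot, at the observed ~95% mineralisation, works out to
# a few hundred tonnes of stable rock.
print(round(calcite_from_co2(175), 1))
```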

Whether CarbFix-like schemes will work at the scale required for fossil-fuel power stations remains to be seen. In these, the main additional pollutant is sulphur dioxide, which has different chemical characteristics from hydrogen sulphide. Scrubbing may therefore still be needed. Another constraint is the supply of basalt. Though this rock is common, it is found predominantly on the ocean floor. Indeed, geologically speaking, Iceland itself is a piece of ocean floor; it just happens to be above sea level. There are some large basaltic regions on dry land, but they are not necessarily in convenient places.

Nevertheless, if the will were there, pipelines from industrial areas could be built to carry exhaust gases to this basalt. It has not, after all, proved hard to do the reverse: carrying natural gas by pipeline from where it is found to where it is used. It is just a question of devising suitable sticks and carrots to assist the process. How much those sticks and carrots would cost is crucial. But Dr Matter’s proof of the principle of chemical sequestration in rock suggests it would be worth finding out.
