Showing posts with label Nature.com.

Saturday, February 24

A Preordained Universe Implied by Quantum Theory


Was there ever any choice in the Universe being as it is? Albert Einstein could have been wondering about this when he remarked to mathematician Ernst Strauss: “What I’m really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all.”

US physicist James Hartle, who died earlier this year aged 83, made seminal contributions to this continuing debate. Early in the twentieth century, the advent of quantum theory seemed to have blown out of the water ideas from classical physics that the evolution of the Universe is ‘deterministic’.

Hartle contributed to a remarkable proposal that, if correct, completely reverses a conventional story about determinism’s rise with classical physics, and its subsequent fall with quantum theory. A quantum Universe might, in fact, be more deterministic than a classical one — and for all its apparent uncertainties, quantum theory might better explain why the Universe is the one it is, and not some other version.     READ MORE...

Monday, January 29

Technologies to Watch


From protein engineering and 3D printing to detection of deepfake media, here are seven areas of technology that Nature will be watching in the year ahead.

Deep learning for protein design
Two decades ago, David Baker at the University of Washington in Seattle and his colleagues achieved a landmark feat: they used computational tools to design an entirely new protein from scratch. ‘Top7’ folded as predicted, but it was inert: it performed no meaningful biological functions. Today, de novo protein design has matured into a practical tool for generating made-to-order enzymes and other proteins. “It’s hugely empowering,” says Neil King, a biochemist at the University of Washington who collaborates with Baker’s team to design protein-based vaccines and vehicles for drug delivery. “Things that were impossible a year and a half ago — now you just do it.”  READ MORE...

Wednesday, December 6

More Than 1,000 Qubits



One of IBM’s latest quantum processors has improved the reliability of its qubits. Credit: Ryan Lavine for IBM



IBM has unveiled the first quantum computer with more than 1,000 qubits — the quantum equivalent of the digital bits in an ordinary computer. But the company says it will now shift gears and focus on making its machines more error-resistant rather than larger.

For years, IBM has been following a quantum-computing road map that roughly doubled the number of qubits every year. The chip unveiled on 4 December, called Condor, has 1,121 superconducting qubits arranged in a honeycomb pattern. It follows on from the company’s other record-setting, bird-named machines, including a 127-qubit chip in 2021 and a 433-qubit one last year.

Quantum computers promise to perform certain computations that are beyond the reach of classical computers. They will do so by exploiting uniquely quantum phenomena such as entanglement and superposition, which allow multiple qubits to exist in multiple collective states at once.  READ MORE...
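As a rough illustration of the superposition and entanglement mentioned above, the sketch below builds a two-qubit Bell state with plain linear algebra. It is a toy state-vector simulation in Python, assuming nothing about IBM's hardware or software stack; the gate matrices are the standard Hadamard and CNOT.

    import numpy as np

    # Single-qubit basis state |0>
    ket0 = np.array([1, 0], dtype=complex)

    # Hadamard gate: puts one qubit into an equal superposition of |0> and |1>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # CNOT gate: flips the second qubit when the first is |1>, entangling the pair
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Start in |00>, apply H to the first qubit, then CNOT across both
    state = np.kron(ket0, ket0)
    state = np.kron(H, np.eye(2)) @ state
    state = CNOT @ state

    print(state)  # amplitude ~0.707 on |00> and on |11>, zero elsewhere

The two qubits now occupy a single collective state: measuring one fixes the outcome of the other, which is the kind of behaviour classical bits cannot reproduce.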

Friday, November 17

Google's DeepMind AI


Artificial-intelligence (AI) firm Google DeepMind has turned its hand to the intensive science of weather forecasting — and developed a machine-learning model that outperforms the best conventional tools as well as other AI approaches at the task.

The model, called GraphCast, can run on a desktop computer and makes more accurate predictions than conventional models in minutes rather than hours.

“GraphCast currently is leading the race amongst the AI models,” says computer scientist Aditya Grover at the University of California, Los Angeles. The model is described1 in Science on 14 November.  READ MORE...

Wednesday, November 15

Chinese Robot Producing Oxygen from Water


Researchers in China have developed a robot chemist powered by artificial intelligence (AI) that might be able to extract oxygen from water on Mars. The robot uses materials found on the red planet to produce catalysts that break down water, releasing oxygen. 

The idea could complement existing oxygen-generating technologies or lead to the development of other catalysts able to synthesize useful resources on Mars.

“If you think about the challenge of going to Mars, you have to work with local materials,” says Andy Cooper, a chemist at the University of Liverpool, UK. “So I can see the logic behind it.”   READ MORE...

Tuesday, September 27

Professors Trained At Elite Universities

One in eight tenure-track professors at US institutions got their PhDs from just five elite US universities, according to a study. Credit: Paul Marotta/Getty



US universities hire most of their tenure-track faculty members from the same handful of elite institutions, according to a study1. The finding suggests that prestige is overvalued in hiring decisions and that academic researchers have little opportunity to obtain jobs at institutions considered more elite than the ones at which they were trained.

Specifically, the study, published in Nature on 21 September, shows that just 20% of PhD-granting institutions in the United States supplied 80% of tenure-track faculty members to institutions across the country between 2011 and 2020 (see ‘Hiring bias’). 
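The 20%-supplies-80% figure is a cumulative-share calculation over a ranked list of supplier institutions. The toy sketch below uses made-up placement counts, not the study's data, simply to show how such a statistic is computed.

    import numpy as np

    # Hypothetical number of tenure-track hires supplied by each PhD-granting institution
    placements = np.array([900, 700, 500, 300, 200, 100, 80, 60, 40, 20])

    # Rank institutions from largest to smallest supplier and accumulate their share of all hires
    shares = np.sort(placements)[::-1] / placements.sum()
    cumulative = np.cumsum(shares)

    # Smallest fraction of institutions that together account for 80% of placements
    n_needed = int(np.argmax(cumulative >= 0.80)) + 1
    print(n_needed / len(placements))  # 0.4 here: 40% of these toy institutions supply 80% of hires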

No historically Black colleges and universities (HBCUs) or Hispanic-serving institutions (HSIs) were among that 20%, says Hunter Wapman, a computer scientist at the University of Colorado Boulder (CU Boulder) and a co-author of the paper. One in eight US-trained tenure-track faculty members got their PhDs from just five elite universities: the University of California, Berkeley; Harvard University in Cambridge, Massachusetts; the University of Michigan in Ann Arbor; Stanford University in California; and the University of Wisconsin–Madison.

“It’s not surprising, but it is jarring” to see these data, says Leslie Gonzales, a social scientist who studies higher education at Michigan State University in East Lansing. 

“There’s so much brilliant work and training of brilliant scholars that’s happening outside of this tiny sliver” of institutions, including at HBCUs and HSIs — and it’s being overlooked, she says.  READ MORE...

Wednesday, March 2

Higher Emotional Awareness

 


Abstract

The tendency to reflect on the emotions of self and others is a key aspect of emotional awareness (EA)—a trait widely recognized as relevant to mental health. However, the degree to which EA draws on general reflective cognition vs. specialized socio-emotional mechanisms remains unclear. Based on a synthesis of work in neuroscience and psychology, we recently proposed that EA is best understood as a learned application of domain-general cognitive processes to socio-emotional information. In this paper, we report a study in which we tested this hypothesis in 448 (125 male) individuals who completed measures of EA and both general reflective cognition and socio-emotional performance. As predicted, we observed a significant relationship between EA measures and both general reflectiveness and socio-emotional measures, with the strongest contribution from measures of the general tendency to engage in effortful, reflective cognition. This is consistent with the hypothesis that EA corresponds to the application of general reflective cognitive processes to socio-emotional signals.
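One way to read "strongest contribution" is as a multiple-regression-style comparison of predictors. The sketch below is purely illustrative: the variable names, weights, and simulated data are assumptions, not the study's actual measures or analysis pipeline.

    import numpy as np
    from numpy.linalg import lstsq

    rng = np.random.default_rng(0)
    n = 448  # sample size matching the study

    # Simulated predictor scores (arbitrary units): general reflectiveness and socio-emotional performance
    reflectiveness = rng.normal(size=n)
    socio_emotional = rng.normal(size=n)

    # Simulated EA score in which reflectiveness is given the larger weight (an assumption for illustration)
    ea = 0.5 * reflectiveness + 0.2 * socio_emotional + rng.normal(scale=0.8, size=n)

    # Ordinary least squares: regress EA on both predictors plus an intercept
    X = np.column_stack([np.ones(n), reflectiveness, socio_emotional])
    coef, *_ = lstsq(X, ea, rcond=None)
    print(coef)  # intercept and two slopes; the larger slope marks the stronger contributor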


Introduction
Trait differences in emotional awareness (EA) have been the topic of a growing body of empirical work in psychology and psychiatry. Individuals with high EA report granular emotional experiences and perceive similar experiences in others, often promoting more adaptive social and emotional functioning (for a review, see1; for related work, see2). Current theoretical models posit that the tendency to consciously reflect on the emotions of self and others (e.g., their causes, associated sensations, and how they can be regulated) is a key aspect of EA3, as well as of related constructs such as emotional intelligence4,5,6,7 and alexithymia8,9,10. As measured by the Levels of Emotional Awareness Scale (LEAS;11,12), multiple studies suggest that EA is an important determinant of adaptive emotional functioning. High EA has been linked to emotion recognition abilities and openness to experience, among other adaptive skills11,13,14,15,16,17,18,19. Low EA has also been associated with multiple affective disorders20,21,22,23,24,25. The neurocognitive basis of EA is also an important question within both basic science and clinical research, with a growing number of studies on its developmental basis16,26,27 and neural correlates (e.g., for a review, see28; for more recent studies, see23,29,30,31,32,33,34,35,36,37).

One important unanswered question pertains to the degree to which the tendency to reflect on emotion in EA depends on domain-general reflective cognitive processes vs. specialized socio-emotional mechanisms. Some models make strong distinctions between emotional and cognitive processes and suggest that the brain contains specialized emotional mechanisms38,39; and some neuroscientific studies also suggest the presence of brain regions selectively engaged by social cognition40,41,42,43,44. In contrast, other cognitive and neural models suggest less separability between socio-emotional and cognitive processes1,45,46,47,48,49. In a recent review50, we drew on work within evolutionary, developmental, and cognitive neuroscience to argue that EA may have an important dependence on domain-general cognitive processes.

Specifically, EA appears to require holding emotional information in mind, integrating it with other available information in perception and memory, and using this information to reflectively plan adaptive courses of action (especially in social situations). While these abilities may be constrained by cognitive capacity (e.g., working memory span, IQ;32), it is suggested that trait differences in EA may further depend on the tendency to engage these reflective processes, independent of whether latent cognitive capacity is high or low. In this view, EA involves the application of effortful cognitive processes to emotion-related information (e.g., interoceptive information within oneself, facial, postural, and vocal cues in others, context cues, etc.), which may be facilitated during development by prepared learning and automatic attention biases toward socio-affective signals. The domain-general processes under discussion are “reflective” in the sense that they operate on mental contents in an integrative, slow, and deliberate manner—often reducing the chances of responding maladaptively in emotionally charged situations. However, the degree to which EA depends on domain-general reflective cognitive processes requires further empirical testing.  READ MORE...

Friday, February 25

Waiting For A Star To Explode

Supernova 1987A appears as a bright spot near the centre of this image of the Tarantula nebula, taken by the ESO Schmidt Telescope. Credit: ESO

Masayuki Nakahata has been waiting 35 years for a nearby star to explode.

He was just starting out in science the last time it happened, in February 1987, when a dot of light suddenly appeared in the southern sky. It was the closest supernova seen in modern times, and the event, known as SN 1987A, gained worldwide media attention and led to dramatic advances in astrophysics.

Nakahata was a graduate student at the time, working on what was then one of the world’s foremost neutrino catchers, the Kamiokande-II detector at the Kamioka Underground Observatory near Hida, Japan. He and a fellow student, Keiko Hirata, spotted evidence of neutrinos pouring out of the supernova — the first time anyone had seen these fundamental particles originating from anywhere outside the Solar System.

Now, Nakahata, a physicist at the University of Tokyo, is ready for when a supernova goes off. He is head of the world’s largest neutrino experiment of its kind, Super-Kamiokande, where upgrades to its supernova alert system were completed late last year. The improvements will enable the observatory’s computers to recognize when it is detecting neutrinos from a supernova, almost in real time, and to send out an automated alert to conventional telescopes worldwide.
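The alert system described here boils down to burst detection: notice a sudden excess of neutrino events over the usual background rate and notify telescopes without waiting for a human. The sketch below is a minimal illustration of that idea using a sliding window and made-up thresholds; it is not Super-Kamiokande's actual trigger logic.

    from collections import deque
    import time

    WINDOW_SECONDS = 10    # assumed length of the burst window
    BURST_THRESHOLD = 60   # assumed event count far above background fluctuations

    recent_events = deque()  # timestamps of recent neutrino candidates

    def register_event(timestamp, send_alert):
        """Record one neutrino candidate and raise an alert if the window count spikes."""
        recent_events.append(timestamp)
        # Discard events that have slid out of the window
        while recent_events and timestamp - recent_events[0] > WINDOW_SECONDS:
            recent_events.popleft()
        if len(recent_events) >= BURST_THRESHOLD:
            send_alert(n_events=len(recent_events), t=timestamp)

    # Stand-in for the automated message to robotic telescopes
    register_event(time.time(), lambda **info: print("Supernova alert:", info))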

Astronomers will be waiting. “It’s gonna give everybody the willies,” says Alec Habig, an astrophysicist at the University of Minnesota, Duluth. Early warning from Super-Kamiokande and other neutrino observatories will trigger robotic telescopes — in many cases responding with no human intervention — to swivel in the direction of the dying star to catch the first light from the supernova, which will come after the neutrino storm.

But when the light arrives, it could be too much of a good thing, says Patrice Bouchet, an astrophysicist at the University of Paris-Saclay who made crucial observations of SN 1987A, from the La Silla Observatory in Chile. The brightest events, which would shine brighter than a full Moon and be visible during the day, would overwhelm the ultra-sensitive but delicate sensors in the telescopes used by professional astronomers.

Thursday, August 19

Thought and Metabolism

To regulate adaptive behaviour, the brain relies on a continuous flow of cognitive and memory-related processes that require a constant energy supply. Weighing around 1,200 grams in women and 1,300 grams in men, on average, the brain consumes around 90 grams, or 340 kilocalories’ worth, of glucose per day, accounting for around half of the body’s glucose demand1,2.
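The 340-kilocalorie figure is consistent with the energy density of glucose, roughly 3.75 kilocalories per gram, so a quick back-of-the-envelope check (with that approximate value assumed) is:

    # Rough check of the quoted figures; 3.75 kcal/g (~15.6 kJ/g) is an approximate value for glucose
    kcal_per_gram_glucose = 3.75
    grams_per_day = 90
    print(grams_per_day * kcal_per_gram_glucose)  # ~337 kcal, close to the quoted ~340 kcal per day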

The tight integration of metabolic and cognition-related signals might aid the matching of the brain’s energy supply to its energy needs, by optimizing foraging behaviour and efforts to limit energy expenditure. 

The synchronization of glucose supply with brain activity has so far been considered a function of a structure called the hypothalamus, at the base of the brain. Writing in Nature, Tingley et al.3 provide evidence in rats for the role of another brain region, called the hippocampus, which is typically implicated in memory and navigation, in this equation (Fig. 1).


Figure 1 | Brain signals that regulate glucose levels in the body periphery. The hypothalamus in the brain helps to regulate glucose concentrations in the blood and in the interstitial fluid that surrounds cells in the body. This hypothalamic (feedback-mediated) regulation is activated, for example, during stress. Tingley et al.3 provide evidence in rats that another brain structure, the hippocampus, also regulates peripheral glucose concentrations. In the hippocampus, oscillatory patterns — called sharp wave-ripples (SPW-Rs) — emerge in the collective electrical potential across the membranes of neurons. They seem to signal, by way of a region called the lateral septum, to the hypothalamus to produce dips in interstitial glucose concentration about 10 minutes later. The feedback mechanism in this regulatory loop is unknown (dashed arrow). Given that hippocampal SPW-Rs are a hallmark of the reprocessing of previous experiences, they might thus control the brain’s energy supply during a ‘thought-like’ mode.

The hippocampus receives many types of sensory and metabolic information, and projections from neuronal cells in the hippocampus extend to various parts of the brain, including the hypothalamus. Thus, the hippocampus might indeed represent a hub in which metabolic signals are integrated with cognitive processes3.

To examine this possibility, Tingley and colleagues recorded oscillatory patterns called sharp wave-ripples (SPW-Rs), reflecting changes in electrical potential across the cell membranes of neuronal-cell ensembles in the hippocampi of rats. They did this while using a sensor inserted under the skin of the animals’ backs to continuously measure glucose levels in the interstitial fluid surrounding the cells there.  READ MORE
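The analysis implied here is a lagged relationship: SPW-R activity followed, roughly ten minutes later, by a dip in interstitial glucose. The sketch below cross-correlates two simulated signals with that delay built in; it uses made-up data and is not the authors' recording or analysis code.

    import numpy as np

    rng = np.random.default_rng(1)
    minutes = 600
    spwr_rate = rng.poisson(2.0, size=minutes).astype(float)  # simulated SPW-R counts per minute

    # Simulated interstitial glucose: noise plus a dip ten minutes after SPW-R activity (assumed effect)
    glucose = rng.normal(0.0, 1.0, size=minutes)
    glucose[10:] -= 0.5 * spwr_rate[:-10]

    # Correlate SPW-R rate with glucose at a range of lags; the most negative value marks the delay
    lags = range(31)
    corrs = [np.corrcoef(spwr_rate[:minutes - lag], glucose[lag:])[0, 1] for lag in lags]
    print(int(np.argmin(corrs)))  # expected to be near 10 (minutes) for this simulated example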