
This image shows that when the setup is rotated at 0.5 rpm, the experimental system on the left keeps the hottest (white) part of the ring fixed at the bottom after several seconds of motion. The reference on the right shows the hottest part of the ring has moved further round the ring in conjunction with its motion. Credit: National University of Singapore
A ground-breaking study conducted by researchers from the National University of Singapore (NUS) has revealed a method of using quantum mechanical wave theories to "lock" heat into a fixed position.




Ordinarily, a source of heat diffuses through a conductive material until it dissipates, but Associate Professor Cheng-Wei Qiu from the Department of Electrical and Computer Engineering at the NUS Faculty of Engineering and his team used the principle of anti-parity-time (APT) symmetry to show that it is possible to confine the heat to a small region of a metal ring without it spreading over time.
In the future, this newly demonstrated phenomenon could be used to control heat diffusion in sophisticated ways and optimize efficiency in systems that need cooling. The results of the study were published on 12 April 2019 in the journal Science.
Freezing the spread of heat
"Imagine a droplet of ink in a flowing stream. After a short amount of time you would see the ink spread and flow in the direction of the current. Now imagine if that ink droplet stayed the same size and in the same position as the water flowed around it. Effectively that is what we have accomplished with the spread of heat in our experiment," explained Assoc Prof Qiu.
The experimental setup of this study consists of two oppositely rotating metal rings, sandwiched together with a thin layer of grease. The rotating motion of the rings acts like the flow of the stream in the scenario. When heat is injected at a point in the system, the thermal energy is able to stay in position because one rotating ring is coupled to the counter-rotating ring by the principles of APT symmetry.
The conditions of this experiment must be quite precise for it to succeed. "From quantum mechanical theory, you can calculate the velocity needed for the rings. Too slow or too fast, and you will break the condition," said Assoc Prof Qiu. When the conditions are broken, the system acts conventionally, and the heat is carried forward as the ring rotates.
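The existence of a critical velocity can be illustrated with a toy one-dimensional model (a sketch of my own, not the NUS team's actual analysis; all parameter values are illustrative): two rings carrying heat by diffusion plus equal and opposite advection, thermally coupled through the grease layer.

```python
import numpy as np

# Toy sketch of the two-ring idea. Each ring diffuses and advects heat,
# and exchanges heat with the other ring:
#   dT1/dt = D*T1'' - v*T1' + k*(T2 - T1)
#   dT2/dt = D*T2'' + v*T2' + k*(T1 - T2)
# For a spatial Fourier mode q the decay rates are
#   -D*q**2 - k +/- sqrt(k**2 - v**2*q**2),
# purely real (a stationary, fading hot spot) while v*q < k, but complex
# (a travelling hot spot) once the rings spin too fast -- the critical
# velocity Assoc Prof Qiu mentions.

N = 200                       # grid points around each ring
dx = 2 * np.pi / N            # ring parameterised by angle 0..2*pi
x = np.arange(N) * dx
D, k, v = 0.01, 1.0, 0.2      # diffusivity, coupling, ring speed (v*q < k)
dt = 0.2 * dx**2 / D          # stable explicit time step

T1 = np.exp(-((x - np.pi) ** 2) / 0.1)   # inject heat at angle pi
T2 = T1.copy()
for _ in range(2000):
    lap1 = (np.roll(T1, 1) - 2 * T1 + np.roll(T1, -1)) / dx**2
    lap2 = (np.roll(T2, 1) - 2 * T2 + np.roll(T2, -1)) / dx**2
    gr1 = (np.roll(T1, -1) - np.roll(T1, 1)) / (2 * dx)
    gr2 = (np.roll(T2, -1) - np.roll(T2, 1)) / (2 * dx)
    T1, T2 = (T1 + dt * (D * lap1 - v * gr1 + k * (T2 - T1)),
              T2 + dt * (D * lap2 + v * gr2 + k * (T1 - T2)))

# Despite both rings moving, the hottest point stays close to angle pi.
print("hot spot at angle", x[np.argmax(T1)])
```

Raising v well above k in the same loop breaks the condition, and the hot spot is carried around the ring instead of staying put.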



Rotated at 0.5 rpm, the experimental system on the left shows the hottest (white) part of the ring is fixed at the bottom as it moves. The reference on the right shows the hottest part of the ring moving in conjunction with its motion. Credit: National University of Singapore

Bucking the trend
Applying the principles of APT symmetry to systems involving heat is a complete departure from the current school of thought in this area. "It's drastically different from the currently popular research topics. In this field, many groups are working on parity-time (PT) symmetry setups, and almost all of them are looking at wave mechanics. This is the first time anyone has stepped out of the domain of waves, and shown that APT symmetry is applicable to diffusion-based systems such as heat," stated Assoc Prof Qiu.
This demonstration of a fixed area of heat within moving metal seems counterintuitive, as Assoc Prof Qiu admits, "Before this study, people actually thought this was a forbidden area, but we can explain all of it. It doesn't violate any laws of physics." In reality, the reason Assoc Prof Qiu and his team were able to control the heat was by introducing an extra degree of freedom into their ingenious experimental setup—the rotation of the rings.
"For APT symmetry to become significant in a system, there must be some element of loss and gain within the setup—and they need to be balanced. In a traditional thermal diffusion system, APT symmetry is not consequential because there is no gain or loss degree of freedom. Hence, the mechanical rotation is the key player here," he explained.
Potential applications and next steps
Many modern technologies require the efficient removal of heat. Mechanical setups like engines, as well as computational and electrical components need to be effectively cooled. Currently, most technologies are cooled with a steady flow of liquid to take away the heat by convection.
"This experiment shows that we need to be more careful when determining the flow rate and design of these systems," Assoc Prof Qiu stated. Whilst his experimental setup contained counter-rotating metal rings, the same principle could be applied to other setups in flux. "The perception is that the circulation will simply take away the heat, but it's not always so straightforward," he added.
Next, the team is looking to increase the size of their experiment. "At the moment our setup is in the range of centimetres, so we want to scale it up to the size of real motors or gearing systems. Gearing systems often have similar counter-rotating mechanisms which will generate heat, so we wish to apply the theory to dissipate this heat more efficiently," Assoc Prof Qiu said.

source and credits: phys.org

credits: DEPOSITPHOTOS

Jayshree Pandya, Contributor, Cognitive World (AI & Big Data)
Jayshree Pandya is Founder of Risk Group & Host of Risk Roundup.
Quantum physics, the study of the universe on an atomic scale, gives us a reference model to understand the human ecosystem in the discrete individual unit. It helps us understand how individual human behavior impacts collective systems and the security of humanity.
Metaphorically, we can see this in how a particle can act like either a particle or a wave. The concept of entanglement is at the core of much of applied quantum physics. The commonly understood definition of entanglement says that particles can be generated to have a distinct reliance on each other, despite any three- or four-dimensional distance between the particles. What this definition and understanding imply is that even if two or more particles are physically detached with no traditional or measurable linkages, what happens to one still has a quantifiable effect on the other.
Now, individuals and entities across NGIOA are part of an entangled global system. Since the ability to generate and manipulate pairs of entangled particles is at the foundation of many quantum technologies, it is important to understand and evaluate how the principles of quantum physics translate to the survival and security of humanity.
If an individual human is seen as a single atom, is our behavior guided by deterministic laws? How does individual human behavior impact the collective human species? How is an individual representative of how collective systems, whether they be economic to security-based systems, operate?
Acknowledging this emerging reality, Risk Group initiated a much-needed discussion on Strategic Impact of Quantum Physics on Financial Industry with Joseph Firmage, Founder & Chairman at National Working Group on New Physics based in the United States, on Risk Roundup.


source and credits: forbes
Snehal Fernandes · 06-May-2019
A multinational team of astronomers, led by a Mumbai-based scientist, has solved a long-standing cosmic puzzle: how many years after the Big Bang did the universe achieve the conditions that eventually led to the universe we see around us today? The seven-member team, including 35-year-old theoretical physicist Girish Kulkarni from the Tata Institute of Fundamental Research (TIFR), Mumbai, found that the universe finished heating up 12.7 billion years ago by a process called reionisation, which occurred due to light from young stars formed in the first galaxies. That's 1.1 billion years after the Big Bang. The paper was published in the April issue of the Monthly Notices of the Royal Astronomical Society, London.
“Reionisation led to the universe we see around us,” said Dr Kulkarni, the paper’s lead author. “Without reionisation, the universe would be dark and cold with temperatures close to absolute zero (–273.15 degrees Celsius). The universe would not be hot as it is today. There would be no galaxies or solar systems, and humans would not exist.”
Previous studies had suggested that reionisation occurred much earlier, within one billion years of the Big Bang, the accepted theoretical birth of our universe 13.8 billion years ago.
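The dates quoted above are simple arithmetic on the accepted 13.8-billion-year age of the universe, which is easy to check:

```python
# Checking the timeline quoted in the article.
age_now = 13.8          # billion years since the Big Bang
reionisation_end = 1.1  # billion years after the Big Bang

years_ago = age_now - reionisation_end
print(round(years_ago, 1))                    # 12.7 billion years ago
print(f"{reionisation_end / age_now:.0%}")    # ~8% of the universe's current age
```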
The ‘reionisation epoch’ is the period of time when ultraviolet light from the first galaxies ionised the gas in deep space, transforming the universe from a neutral to an ionised state, in which it remains today. Simply put, ionisation is when an atom or a molecule acquires a negative or positive charge by gaining or losing electrons. An atom thus formed is called an ion. This takes place in combination with other chemical changes. In a neutral state, an atom has an equal number of protons (positively-charged sub-atomic particles) and negatively-charged electrons.


Researchers said understanding the thermal evolution of the universe since the Big Bang fills a gap in our understanding of the universe’s history. The study findings will also aid future experiments such as the 10-nation Square Kilometre Array (SKA) of which India is a member and which aims to detect neutral hydrogen from the early universe to uncover as-yet-unseen epochs of cosmic evolution.
“Late reionisation is also good news for future experiments that aim to detect the neutral hydrogen which is important to uncover cosmic evolution from the early universe,” Kulkarni said. “The later the reionisation, the easier it will be for experiments such as SKA to succeed.”
Knowing the accurate time of reionisation is important, said researchers, to better understand the formation and characteristics of the first galaxies.
“The findings are very significant,” said George Becker, professor at the department of physics and astronomy, University of California, Riverside, who was not involved in the study and whose research interests include cosmic reionisation. “The authors have shown, for the first time, that reionisation, one of the most significant events in the universe, may have lasted far longer than previously thought. When reionisation occurred carries implications for the properties of the first galaxies.”
Fifty million years after the Big Bang, the universe – mostly made of gas at temperatures a few degrees above absolute zero – was dark, and devoid of bright stars and galaxies. During reionisation, the universe transitioned out of these cosmic dark ages and is today filled with hot and ionised hydrogen gas at a temperature of around 10,000 degrees Celsius.
Hydrogen gas dims light from distant galaxies much like streetlights are dimmed by fog on a winter morning. By observing this dimming of a special type of bright galaxies called quasars, astronomers can study conditions in the early universe.
In the last few years, observations of this specific dimming pattern suggested that this fogginess of the universe varies significantly from one part of the universe to another, but the reason behind these variations was unknown.
To find the cause behind these variations, Kulkarni and his team used state-of-the-art computer simulations of intergalactic gas and radiation performed on supercomputers based at the universities of Cambridge, Durham and Paris. “We found that variations in fogginess result from large regions full of cold hydrogen gas present in the universe when it was just 1.1 billion years old. This enabled us to pinpoint when reionisation ended,” said Kulkarni, department of theoretical physics, TIFR. “This is how today, 13.8 billion years after the Big Bang, the universe is bathed in light from stars in a variety of galaxies, and the gas is a thousand times hotter.”
Researchers said that the study opens up the way to observe an era in the universe’s past that has not yet been seen by astronomers, and also solves the puzzle of why the universe is so different today compared to when it was formed.
Professor Abraham Loeb, chair of department of astronomy, and Frank B. Baird Jr. Professor of Science at Harvard University, who was not involved with the study, said the research findings are “important” and “explains consistently several independent observations of the infant universe”.


“The paper demonstrates that the process of reionisation completed about a billion years after the Big Bang, when the universe was 7% of its current age,” said Loeb. “Computer simulations show consistency with measurement of the large variations in the islands of neutral hydrogen as probed through the Lyman-alpha absorption of quasar light, and measurements of the column of free electrons inferred from scattering of the cosmic microwave background.”


Knowing how to solve physics problems is a process that can be learned.

May 11, 2019

Albert Einstein once said: “The formulation of the problem is often more essential than its solution ... 

credits:deposit photos
Former Apple CEO Steve Jobs said, "When you start looking at a problem and it seems really simple, you don’t really understand the complexity of the problem. Then you get into the problem, and you see that it’s really complicated, and you come up with all these convoluted solutions. That’s sort of the middle, and that’s where most people stop… But the really great person will keep on going and find the key, the underlying principle of the problem — and come up with an elegant, really beautiful solution that works."
Solving problems, whether in physics or other disciplines, can be learned. Rafis Abazov on the TopUniversities website promotes the IDEAL methodology for his students: Identify, Define, Examine, Act and Look.
1. Identify the problem - identify the nature of the problem by visualizing the physical situation, and translating the written information in the problem into mathematical variables. Draw a diagram showing the objects, and their motions or interactions. For example, an interaction can be two objects connected by a rope.
2. Define the main elements of the problem - on the diagram, label all the known and unknown information. This allows you to translate between verbal, visual, and mathematical modes and their concrete manifestations of words, pictures, and equations. Be sure to include each item's associated units; this will help you identify what is being solved for.
3. Examine possible solutions - once the physical situation has been visualized and diagrammed, and the numerical information has been extracted from the problem statement, students can either use their background knowledge of physics and physics formulae or else they can seek out that information in class notes, instructional packets, textbooks or online resources.


It sometimes helps to work backward by saying, "I want the answer to Z, but if I knew Y, I could find Z, and if I knew X ..." and so forth, until you get back to something you are given in the original problem statement.

4. Act on resolving the problem - this often includes working through previous problems that are similar, and observing the solution process. Then, the known information is substituted into the identified formulae to solve for the unknown quantity. Always solve symbolically first before putting in the actual quantities. This allows you to make sure your answer makes sense in the physical world.
5. Look for lessons to be learned - by evaluating the solution process, you can formulate the lessons you've learned so that the next problem-solving project will be more effective.
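As a concrete illustration of steps 1–4 (my own sketch, not from the article), here is the classic two-objects-connected-by-a-rope problem from step 1, worked with the SymPy library so that the solution is found symbolically before any numbers go in, as step 4 advises:

```python
import sympy as sp

# An Atwood machine: two masses joined by a rope over a frictionless pulley.
# Identify/Define: knowns m1, m2, g; unknowns a (acceleration), T (tension).
m1, m2, g, a, T = sp.symbols('m1 m2 g a T', positive=True)

# Examine: Newton's second law for each mass (taking m2 as the heavier,
# descending mass).
eq1 = sp.Eq(T - m1 * g, m1 * a)     # lighter mass, accelerating upward
eq2 = sp.Eq(m2 * g - T, m2 * a)     # heavier mass, accelerating downward

# Act: solve symbolically first ...
sol = sp.solve([eq1, eq2], [a, T])
print(sol[a])   # a = (m2 - m1)*g / (m1 + m2), in some equivalent form
print(sol[T])   # T = 2*m1*m2*g / (m1 + m2)

# ... then substitute numbers, tracking units by hand (kg, m/s^2).
nums = {m1: 1.0, m2: 2.0, g: 9.8}
print(float(sol[a].subs(nums)))     # ~3.27 m/s^2
```

Solving symbolically first makes the limiting cases easy to check: with m1 = m2 the symbolic acceleration vanishes, as it should.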
A Solution in a Dream
Sometimes, other hands are at work in the solving of problems. Take chemist August Kekule's solution to the structure of the benzene molecule, and hence the structure of all aromatic compounds. After long struggling with the problem, Kekule took a nap. He dreamed of a snake that was swallowing its own tail, and he awoke with the realization that the shape of the benzene molecule was a ring.

Schematic of a graphene-based two-photon gate. Credit: University of Vienna, created by Thomas Rögelsperger
A novel material that consists of a single sheet of carbon atoms could lead to new designs for optical quantum computers. Physicists from the University of Vienna and the Institute of Photonic Sciences in Barcelona have shown that tailored graphene structures enable single photons to interact with each other. The proposed new architecture for a quantum computer is published in a recent issue of npj Quantum Information.
Photons barely interact with the environment, making them a leading candidate for storing and transmitting quantum information. This same feature makes it especially difficult to manipulate information that is encoded in photons. In order to build a photonic quantum computer, one photon must change the state of a second. Such a device is called a quantum logic gate, and millions of logic gates will be needed to build a quantum computer. One way to achieve this is to use a so-called 'nonlinear material' wherein two photons interact within the material. Unfortunately, standard nonlinear materials are far too inefficient to build a quantum logic gate.
It was recently realized that nonlinear interactions can be greatly enhanced by using plasmons. In a plasmon, light is bound to electrons on the surface of the material. These electrons can then help the photons to interact much more strongly. However, plasmons in standard materials decay before the needed quantum effects can take place.
In their new work, the team of scientists led by Prof. Philip Walther at the University of Vienna propose to create plasmons in graphene. This 2-D material, discovered barely a decade ago, consists of a single layer of carbon atoms arranged in a honeycomb structure, and, since its discovery, it has not stopped surprising us. For this particular purpose, the peculiar configuration of the electrons in graphene leads to both an extremely strong nonlinear interaction and plasmons that live for an exceptionally long time.
In their proposed graphene quantum logic gate, the scientists show that if single plasmons are created in nanoribbons made out of graphene, two plasmons in different nanoribbons can interact through their electric fields. Provided that each plasmon stays in its ribbon, multiple gates can be applied to the plasmons, which is required for quantum computation. "We have shown that the strong nonlinear interaction in graphene makes it impossible for two plasmons to hop into the same ribbon," says Irati Alonso Calafell, first author of the study.
Their proposed scheme makes use of several unique properties of graphene, each of which has been observed individually. The team in Vienna is currently performing experimental measurements on a similar graphene-based system to confirm the feasibility of their gate with current technology. Since the gate is naturally small, and operates at room temperature it should readily lend itself to being scaled up, as is required for many quantum technologies.
source and credits: phys.org
credits:factidiot instagram
Scientists have developed a retinal implant that can restore lost vision in rats, and are planning to trial the procedure in humans later this year.
The implant, which converts light into an electrical signal that stimulates retinal neurons, could give hope to millions who experience retinal degeneration – including retinitis pigmentosa – in which photoreceptor cells in the eye begin to break down, leading to blindness.

The retina is located at the back of the eye, and is made up of millions of these light-sensitive photoreceptors. But mutations in any one of the 240 identified genes can lead to retinal degeneration, where these photoreceptor cells die off, even while the retinal neurons around them are unaffected.
Because the retinal nerves remain intact and functional, previous research has looked at treating retinitis pigmentosa with bionic eye devices that stimulate the neurons with lights, while other scientists have investigated using CRISPR gene editing to repair the mutations that cause blindness.
Now, a team led by the Italian Institute of Technology has developed a new approach, with a prosthesis implanted into the eye that serves as a working replacement for a damaged retina.
The implant is made from a thin layer of conductive polymer, placed on a silk-based substrate and covered with a semiconducting polymer.
The semiconducting polymer acts as a photovoltaic material, absorbing photons when light enters the lens of the eye. When this happens, electricity stimulates retinal neurons, filling in the gap left by the eye's natural but damaged photoreceptors.
To test the device, the researchers implanted the artificial retina into the eyes of rats bred to develop a rodent model of retinal degeneration – called Royal College of Surgeons (RCS) rats.
Thirty days later, after the rats had healed from the operation, the researchers tested how sensitive they were to light – the pupillary reflex – compared to healthy rats and untreated RCS rats.
At the low intensity of 1 lux – a bit brighter than the light from a full moon – the treated rats weren't much more responsive than untreated RCS rats.
But as the light increased to around 4–5 lux – about the same as a dark twilight sky – the pupillary response of treated rats was largely indistinguishable from healthy animals.
When they retested the rats at six and 10 months after surgery, the implant was still effective in the rats – although all the rats in the tests (including the treated rats, the healthy animals, and the RCS controls) had suffered minor vision impairment due to being older.
Using positron emission tomography (PET) to monitor the rats' brain activity during the light sensitivity tests, the researchers saw an increase in the activity of the primary visual cortex, which processes visual information.


Based on the results, the team concludes that the implant directly activates "residual neuronal circuitries in the degenerate retina", but further research will be required to explain exactly how the stimulation works on a biological level.
"[T]he detailed principle of operation of the prosthesis remains uncertain," they explain in their paper.
While there are no guarantees that the results seen in rats will translate to people, the team is hopeful that it will – and from the sounds of things, it won't be too long until we find out.
"We hope to replicate in humans the excellent results obtained in animal models," says one of the researchers, ophthalmologist Grazia Pertile from the Sacred Heart Don Calabria in Negrar, Italy.
"We plan to carry out the first human trials in the second half of this year and gather preliminary results during 2018. This [implant] could be a turning point in the treatment of extremely debilitating retinal diseases."
The findings are reported in Nature Materials.

credits: sciencealert.com
Unlike classical particles, quantum particles can travel in a quantum superposition of different directions. Mile Gu, together with researchers from Griffith University, harnessed this phenomenon to design quantum devices that can generate a quantum superposition of all possible futures. Credit: NTU, Singapore.
In the 2018 movie Avengers: Infinity War, a scene featured Dr. Strange looking into 14 million possible futures to search for a single timeline in which the heroes would be victorious. Perhaps he would have had an easier time with help from a quantum computer. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and Griffith University in Australia have constructed a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.
"When we think about the future, we are confronted by a vast array of possibilities," explains Assistant Professor Mile Gu of NTU Singapore, who led development of the quantum algorithm that underpins the prototype. "These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe." What he and his research group realised, however, was that a quantum computer can examine all possible futures by placing them in a quantum superposition – similar to Schrödinger's famous cat, which is simultaneously alive and dead.
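The arithmetic in that quote is easy to verify: with two choices per minute, the number of distinct futures doubles every minute.

```python
# Two choices per minute: the number of distinct futures after t minutes is 2**t.
futures_24min = 2 ** 24
print(futures_24min)               # 16777216 -- past 14 million in 24 minutes

# Within a day the count dwarfs the roughly 10**80 atoms
# in the observable universe.
futures_one_day = 2 ** (24 * 60)
print(futures_one_day > 10 ** 80)  # True
```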
To realise this scheme, they joined forces with the experimental group led by Professor Geoff Pryde at Griffith University. Together, the team implemented a specially devised photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons – quantum particles of light. They then demonstrated that the state of the quantum device was a superposition of multiple potential futures, weighted by their probability of occurrence.

A picture of the experimental device used for the experiment. Credit: Griffith University
"The functioning of this device is inspired by the Nobel Laureate Richard Feynman," says Dr. Jayne Thompson, a member of the Singapore team. "When Feynman started studying quantum physics, he realized that when a particle travels from point A to point B, it does not necessarily follow a single path. Instead, it simultaneously traverses all possible paths connecting the points. Our work extends this phenomenon and harnesses it for modelling statistical futures."
The machine has already demonstrated one application—measuring how much our bias towards a specific choice in the present impacts the future. "Our approach is to synthesise a quantum superposition of all possible futures for each bias," explains Farzad Ghafari, a member of the experimental team. "By interfering these superpositions with each other, we can completely avoid looking at each possible future individually. In fact, many current artificial intelligence (AI) algorithms learn by seeing how small changes in their behaviour can lead to different future outcomes, so our techniques may enable quantum enhanced AIs to learn the effect of their actions much more efficiently."
The team notes while their present prototype simulates at most 16 futures simultaneously, the underlying quantum algorithm can in principle scale without bound. "This is what makes the field so exciting," says Pryde. "It is very much reminiscent of classical computers in the 1960s. Just as few could imagine the many uses of classical computers in the 1960s, we are still very much in the dark about what quantum computers can do. Each discovery of a new application provides further impetus for their technological development."
The work is featured in a forthcoming paper in the journal Nature Communications.
credits and via: phys.org

March 22, 2019 by The Associated Press

X-rays stream off the sun in this image showing observations by NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, overlaid on a picture taken by NASA's Solar Dynamics Observatory (SDO). Credit: NASA
Space weather forecaster Jonathan Lash says a solar flare that left the sun this week is due to arrive at Earth around 2 p.m. EDT Saturday.
The National Oceanic and Atmospheric Administration scientist says the flare is too weak and any light show would be limited to Alaska, Canada, Iceland, Norway and other far northern spots.
Lash says the event is unusual but not rare. That's because it is happening during the quiet four-year solar minimum. It's unlikely to cause power or communication problems on Earth, nor will many people get a chance to see shimmering auroras.
credits: phys.org
These findings, published in The Astrophysical Journal, increase the number of black holes known at that epoch considerably.

Published: 19th March 2019 12:30 PM
credits:wireduk



WASHINGTON: Astronomers have discovered 83 quasars powered by supermassive black holes 13 billion light-years away from the Earth, from a time when the universe was less than 10 per cent of its present age.
"It is remarkable that such massive dense objects were able to form so soon after the Big Bang," said Michael Strauss, a professor at Princeton University in the US.
"Understanding how black holes can form in the early universe, and just how common they are, is a challenge for our cosmological models," Strauss said in a statement.
These findings, published in The Astrophysical Journal, increase the number of black holes known at that epoch considerably, and reveal, for the first time, how common they were early in the universe's history.

In addition, it provides new insight into the effect of black holes on the physical state of gas in the early universe in its first billion years.
Supermassive black holes, found at the centres of galaxies, can be millions or even billions of times more massive than the Sun.
While they are prevalent today, it is unclear when they first formed, and how many existed in the distant early universe.
A supermassive black hole becomes visible when gas accretes onto it, causing it to shine as a "quasar."
Previous studies have been sensitive only to the very rare, most luminous quasars, and thus the most massive black holes.
The new discoveries probe the population of fainter quasars, powered by black holes with masses comparable to most black holes seen in the present-day universe.
The team used data taken with the "Hyper Suprime-Cam" (HSC) instrument, mounted on the Subaru Telescope of the National Astronomical Observatory of Japan, located at the summit of Maunakea in Hawaii.
The researchers selected distant quasar candidates from the sensitive HSC survey data.
They then carried out an intensive observational campaign to obtain spectra of those candidates, using three telescopes: the Subaru Telescope; the Gran Telescopio Canarias on the island of La Palma in the Canaries, Spain; and the Gemini South Telescope in Chile.
The survey revealed 83 previously unknown very distant quasars.
Together with 17 quasars already known in the survey region, the researchers found that there is roughly one supermassive black hole per cubic giga-light-year.
The sample of quasars in this study are about 13 billion light-years away from the Earth.
In other words, we are seeing them as they existed 13 billion years ago.
As the Big Bang took place 13.8 billion years ago, we are effectively looking back in time, seeing these quasars and supermassive black holes as they appeared only about 800 million years after the creation of the universe.

source:newindianexpress
Which planet is closest to the Earth? The answer is not Venus.
Science textbooks, and even a quick Google search, have long given Venus as Earth's closest neighbor. But scientists now argue that Mercury is, on average, the closest planet to the Earth.
Presenting their findings in the magazine Physics Today, scientists from NASA, Los Alamos National Laboratory, and the U.S. Army proposed a new way of modeling the planets' orbits that shuffles things around, calculating that Earth's closest neighbor, on average, is actually Mercury. In fact, Mercury is also the nearest neighbor, on average, of every other planet in the solar system.
Normally, we compare planets by their average distance from the Sun. Earth's average distance is 1 astronomical unit (AU), while Venus' is around 0.72 AU. Subtracting one from the other gives an average Earth-Venus distance of 0.28 AU, the smallest such difference for any pair of planets.
But a trio of scientists realized that this isn’t an accurate way to calculate the distances to planets. After all, Earth spends just as much time on the opposite side of its orbit from Venus, placing it 1.72 AU away. One must instead average the distance between every point along one planet’s orbit and every point along the other planet’s orbit.
The scientists ran a simulation based on two assumptions:
  1. the planets’ orbits were approximately circular.
  2. their orbits weren’t at an angle relative to one another.
In the commentary, the researchers devised a new mathematical technique, called the point-circle method, to measure the distances between planets. This method averages the distance between a bunch of points on each planet’s orbit, thereby taking time into consideration.
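Under the two simplifying assumptions above, the point-circle average reduces to averaging the distance between two points on concentric circles over a uniformly distributed relative phase. A minimal numerical sketch in Python (the orbital radii are standard approximate values, not figures taken from the commentary):

```python
import math

def avg_distance(r1, r2, n=100_000):
    """Time-averaged distance between two planets on circular,
    coplanar orbits of radii r1 and r2 (in AU), assuming the
    relative orbital phase phi is uniformly distributed."""
    total = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        # Law of cosines: distance between points at radii r1, r2
        # separated by angle phi.
        total += math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(phi))
    return total / n

# Approximate mean orbital radii in AU.
mercury, venus, earth = 0.387, 0.723, 1.0

print(f"Earth-Mercury: {avg_distance(earth, mercury):.3f} AU")
print(f"Earth-Venus:   {avg_distance(earth, venus):.3f} AU")
```

This reproduces the commentary's headline result: the time-averaged Earth-Mercury distance (roughly 1.04 AU) comes out smaller than the Earth-Venus one (roughly 1.14 AU), even though Venus's orbit passes closer to Earth's.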
A simulation of an Earth year's worth of orbits by the terrestrial planets begins to reveal that Mercury has the smallest average distance from Earth and is most frequently Earth's nearest neighbor. In addition, planetary geoscientist David Rothery ran a solar system simulation for the BBC radio program More or Less and came up with similar results.
When measured that way, Mercury was closest to Earth most of the time. Not only that, but Mercury was also the closest planet to Saturn, and Neptune, and all of the other planets. The researchers checked their findings by mapping out where the planets were in their orbits every 24 hours for 10,000 years.
Steven Beckwith, the director of the Space Science Laboratory and professor of astronomy at UC Berkeley, who was not part of the commentary, said, "Suppose you live in a house where the people who live next door to you spend half the year someplace else, maybe you live in Wisconsin and your nearest neighbors spend seven months of the long winters in Florida. During the winter, the people in the next house over would be closer to you."
