Let’s delve into the scientific research of the past month or so and explore the latest technologies being developed and the breakthroughs achieved in this field.
In this blog, you will read about the following science events of the month:
- India to send astronauts into Space, Names revealed!
- First private lander sets foot on the Moon
- CERN planning faster particle accelerator, more massive than LHC
- New ocean discovered in our Solar System
- Unexpected, alarming measles outbreak in the West
- Quantum computing may adopt Qumodes instead of Qubits
- With Apple launching Vision Pro headsets, scientists concerned about the effects of VR and AR tech on our brains
- First-ever study carried out on the environmental costs of Artificial Intelligence
- New AI learning language through babies’ eyes
- World’s first Genetically Modified Banana approved for consumption
- Scientists gathering e-DNA across the world
- Introducing Beef-Rice, a new food that can help overcome malnutrition
- Scientists working on self-fertilizing plants
India to send astronauts into Space, Names revealed!
Prime Minister Narendra Modi has announced the names of four astronauts for ISRO's Gaganyaan, India's first crewed space mission. The four astronauts are Group Captain Prashanth Balakrishnan Nair, Angad Prathap, Ajit Krishnan, and Shubanshu Shukla, all wing commanders or group captains in the Indian Air Force (IAF).
Image generated by Mufawad using AI
They have extensive experience working as test pilots, making them well-prepared to respond quickly in unforeseen situations. The selection process took place at the IAF's Institute of Aerospace Medicine. Only three of the astronauts will go to space as part of the Gaganyaan mission.
ISRO and Glavkosmos (Russian Space Company) signed a memorandum of understanding in June 2019, and the astronauts trained at Russia's Yuri Gagarin Cosmonaut Training Centre from February 2020 to March 2021.
NASA will also train an Indian astronaut for a mission to the International Space Station by the end of 2024. The Gaganyaan mission will demonstrate India's human spaceflight capability by launching astronauts to an orbit 400 kilometers above Earth for a 3-day mission, followed by safely landing them in Indian sea waters.
Courtesy: Indian Express
First private lander sets foot on the Moon
The first private spacecraft, Odysseus, successfully landed on the Moon on 22nd February, marking a historic lunar accomplishment. Built by Intuitive Machines in Houston, Texas, Odysseus became the first US lunar lander since 1972, when the last crew of Apollo astronauts visited the Moon.
The spacecraft faced challenges before landing, including malfunctioning laser rangefinders that were supposed to guide its autonomous journey to the lunar surface. Mission engineers had to upload a software patch to enable it to use a secondary laser provided by NASA instead.
Image generated by Mufawad using AI
The exact state of the spacecraft remained unclear immediately after its landing, but it sent a faint signal back to mission control in Houston, indicating that at least some portion of it had survived the touchdown.
The landing provides a boost to efforts by commercial firms and the US government aiming to go to the Moon. NASA paid for a large proportion of the private mission and is counting on companies like Intuitive Machines to help ferry equipment and scientific instruments to the Moon in preparation for sending astronauts there.
Odysseus was launched on 15 February from Cape Canaveral in Florida, heading directly for the Moon rather than first orbiting the Earth. It fired its engine several times to set itself on the correct trajectory and transmitted images of Earth and the Moon.
It entered lunar orbit on 21 February, initially circling 92 kilometers above the surface before making its landing attempt. The spacecraft fired its engines to descend to a lower altitude, then moved into a series of autonomous manoeuvres in which it re-oriented itself and began assessing the craters and boulders underneath it.
NASA is interested in the Moon's south pole because the region's dirt and shadowy craters might contain ice that could provide fuel and other resources for future lunar explorers. Most lunar landers have visited the Moon's equatorial regions, with the only mission that has landed near the south pole being India's Chandrayaan-3, which touched down last August.
Odysseus is the second launch of NASA's Commercial Lunar Payload Services (CLPS) programme, which aims to incentivize small aerospace companies to fly payloads for NASA and others to the Moon at low cost.
NASA plans to use these CLPS flights to test technologies for its own missions to the Moon, including plans to send astronauts to the lunar south pole as soon as 2026.
Courtesy: Nature
CERN planning faster particle accelerator, more massive than LHC
CERN, the European Organization for Nuclear Research, is aiming to build the Future Circular Collider (FCC), which could cost up to 20 billion euros. The FCC is a particle accelerator that would sit within a 91-kilometer ring, three times the circumference of the Large Hadron Collider. The project is funded by 23 member states, with Germany, the United Kingdom, France, and Italy being the biggest contributors, while cost optimization and project management are expected to pose a significant challenge.
Image generated by Mufawad using AI
The feasibility study for the FCC is expected to be completed in 2025, with the aim of moving towards construction by 2045. The FCC is conceived as a two-stage machine, starting with electron-positron collisions and transitioning to proton-proton collisions after ten years of operation. This mirrors the history of the LHC's tunnel, which first housed the electron-positron LEP collider before being converted to host the proton-proton LHC.
The €20 billion project is not a small amount of money, but it is justified when considering the world's pressing scientific challenges. CERN has an excellent track record of developing technologies that can be applied to fields beyond high-energy physics, such as superconductor technology, aerospace tech, and medical applications like proton therapy. The operation of the Large Hadron Collider has also yielded knowledge that has helped make leaps and bounds in fields outside of particle physics.
If the Future Circular Collider becomes a reality, there is no reason to believe it won't do the same. The FCC will be operational by 2045 at the very earliest, if everything goes according to plan.
Courtesy: Indian Express
New Ocean discovered in our Solar System
A new analysis of observations by the Cassini spacecraft, which explored the Saturn system from 2004 to 2017, indicates that the rocking motion made by Saturn's moon Mimas as it orbits, known as libration, is caused by a liquid ocean beneath its surface rather than a completely solid core. This discovery adds to the handful of verified subsurface oceans in our solar system and raises the possibility that life could have evolved there as well.
Mimas has been called the "Death Star" because a giant impact crater on one side makes it look like the space station from Star Wars; a crater on Earth of comparable size would be wider than Canada. One of the many moons surrounding Saturn—146 at last count—Mimas is unusual because it rocks heavily from side to side during its orbit around the planet.
Image generated by Mufawad using AI
Such libration could be explained in one of two ways: either Mimas had an extremely elongated core, shaped like a rugby ball, or it had a global ocean below its surface. A new paper by the same team, published in Nature, closely studied how the libration changes the orbit of Mimas and established that the moon indeed has a subsurface ocean.
The researchers suggest the ocean is kept from freezing by heat from tidal forces during the moon’s orbit around Saturn.
Previous searches have found clear signs of subsurface oceans on only two of Saturn’s moons—Enceladus and Titan—and on Jupiter’s moons Europa and Ganymede. Because where there is liquid water there is often life, subsurface oceans are some of the best places to look for extraterrestrial organisms, which scientists speculate might have evolved around hydrothermal vents fed by the moons’ cores.
The latest research suggests that the ocean on Mimas must be young—between 2 and 25 million years old, which is almost no time at all in celestial terms. However, Lainey, one of the study's authors, says the ocean on Mimas, which is relatively warm and may have ample supplies of raw chemicals, might be as good a place as any for life to have evolved.
The discovery also strengthens the idea that subsurface oceans might exist elsewhere in our solar system, particularly on several moons of Uranus and even on some Kuiper Belt objects, which circle the sun beyond Pluto.
Courtesy: National Geographic
Unexpected, alarming measles outbreak in the West
The UK Health Security Agency (UKHSA) has declared a national incident over rising measles cases, with over 300 cases in England since October 1, 2023. The decline in uptake of the measles, mumps and rubella (MMR) vaccine during the COVID-19 pandemic has spurred the spread of the disease across England and the rest of Europe, while small outbreaks have occurred in a handful of US states.
Measles is highly contagious and is spread through coughing and sneezing, with symptoms including a fever, a runny nose, and an itchy rash of red-brown spots. Those most at risk include babies, young children, pregnant people, and those with a weakened immune system.
Low uptake of the measles vaccine is a key driver of the UK measles cases, as only around 85% of children in England have received two MMR vaccine doses by five years old. This falls below the vaccination rate of at least 95% needed to achieve 'herd immunity', which substantially reduces disease spread.
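As a rough illustration of where that 95% figure comes from, here is a sketch using the textbook herd-immunity formula and the commonly quoted R0 range for measles; these are illustrative values, not numbers taken from the UKHSA report:

```python
# Herd-immunity threshold = 1 - 1/R0, where R0 is the basic reproduction number.
# The R0 values below are the range commonly quoted for measles (an assumption
# for illustration, not figures from the UKHSA announcement above).
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1 - 1 / r0

for r0 in (12, 15, 18):
    print(f"R0 = {r0:2d} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
# Prints thresholds of roughly 92-94%, which is why health agencies aim for
# at least 95% two-dose coverage.
```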
The COVID-19 pandemic worsened matters, with social-distancing measures initially reducing measles cases but vaccine uptake also dropping, contributing to the latest surge. Anti-vaccine messaging during the pandemic caused some people to question vaccine safety, which might have delayed uptake.
To tackle the surge, the NHS launched a vaccination campaign on 22 January, urging millions of parents and carers to book vaccine appointments for their children.
In London, only 74% of children have received two doses of the vaccine, which are 97% effective against catching measles. One local council in the capital has launched a vaccine-awareness campaign in multiple languages to reach more people. Without further action, the outbreak could spread more widely across the United Kingdom, causing deaths.
In 2018, the United Kingdom was declared to have eliminated the disease, defined as the absence of circulating measles. In response to the outbreak, Public Health England advised people to get the MMR vaccine.
Since December 1, there have been 23 confirmed measles cases in the United States as well, reflecting a rise in the number of measles cases globally. Europe is facing a more alarming situation, with a 45-fold rise in measles cases in the WHO's European region from 2022 to 2023.
Courtesy: Nature
Quantum computing may adopt Qumodes instead of Qubits
Next-gen quantum computers could potentially rely on retooled laser beams, known as "qumodes," instead of qubits. Qumodes are information-carrying units of a harmonic oscillator, such as light, and have an infinite number of possible states. Understanding and building upon the qumode concept could make quantum computers even faster than they are today.
Image generated by Mufawad using AI
In a new paper published in the journal Science, scientists from the University of Tokyo successfully harnessed photons to perform quantum calculations. The team carefully modified laser pulses by removing one photon at a time and creating interference between pairs of pulses. This could allow just a few qumodes to achieve the same computing power as hundreds of qubits. The University of Tokyo machine could successfully perform digital quantum computations and correct errors in real time.
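A rough sketch of why a qumode can carry so much more information than a qubit (this is the textbook comparison of the two state spaces, not a result from the Science paper itself): a qubit is a superposition of just two basis states, while a qumode of light is a superposition over the infinite ladder of photon-number (Fock) states of a harmonic oscillator.

```latex
\begin{aligned}
|\psi\rangle_{\text{qubit}} &= \alpha\,|0\rangle + \beta\,|1\rangle,
  &\quad |\alpha|^{2} + |\beta|^{2} &= 1 \\
|\psi\rangle_{\text{qumode}} &= \sum_{n=0}^{\infty} c_{n}\,|n\rangle,
  &\quad \sum_{n=0}^{\infty} |c_{n}|^{2} &= 1
\end{aligned}
```

In practice a qumode must be truncated to finitely many photon-number levels, but even a modest cutoff gives each mode far more usable states than a two-level qubit.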
The concept of qumodes, also known as photonic quantum computing, has been known to scientists since around 2000 and has been researched ever since. China has an advanced photonic system named Jiuzhang, which achieved quantum supremacy in 2020, and introduced Jiuzhang 3.0 in October 2023. Some startups in Europe are also working on photonic quantum computers, such as Quix Quantum in the Netherlands.
Courtesy: Popular Mechanics
With Apple launching Vision Pro headsets, scientists concerned about the effects of VR and AR tech on our brains
Apple has released its Vision Pro mixed-reality headset, priced at $3,500, as a "spatial computer" alternative to standard laptops or desktops. The device is designed for all-day use and has already been used by enthusiastic early adopters for dozens of hours on end.
However, many experts are skeptical that this kind of headset can replace physical monitors, keyboards, and mice. Some worry that using such a device for long periods could lead to motion sickness, new types of social isolation, or other unintended consequences.
Image generated by Mufawad using AI
Virtual reality (VR) and augmented reality (AR) are incredible tools for creating unique and immersive experiences, but they don't always prove to be useful in practice.
Apple's 1.4-pound goggles use sensors, including a lidar scanner and a camera array, to place people into what's known as "mixed reality." Outward-facing cameras offer a real-time view of users' surroundings, while two small screens display an interactive digital realm.
Meta's Quest 3 headset, released in October 2023, also uses this style of "pass-through" video technology. Mixed reality is neither traditional VR, which completely blocks out the real world, nor AR, which presents a digital overlay on transparent lenses.
Instead, a pass-through device translates a digital representation of a person's environment into a completely virtual space. This means the device mediates everything about a user's experience.
Pass-through technology presents new risks: visual delays and other distortions can occur, color saturation varies frequently, light is dimmed, some objects appear too close or blurry, and the Vision Pro's image resolution is still lower than what human eyes are used to perceiving.
Researchers advise against spending hours per day with these goggles on and urge caution and restraint from companies lobbying for daily use of these headsets.
Rabindra Ratan, an associate professor of media and information at Michigan State University, suggests that we don't know what it means to walk around the world with reduced peripheral vision or visual distortions for hundreds of hours a month. He adds that there could be effects on the way your eyes move around in space, which could potentially make your vision worse.
Researchers suggest that using such technologies can make basic motor tasks such as pushing elevator buttons, high-fiving, and navigating a crowd on foot much more challenging.
Already people are publicly operating moving vehicles, including cars, while wearing mixed-reality headsets. The US National Highway Traffic Safety Administration released a statement imploring people to not drive while wearing a VR device, in response to online clips of headset-wearing drivers. U.S. Transportation Secretary Pete Buttigieg also posted on social media, noting that all available consumer cars still require completely engaged human drivers.
Simulator sickness, a suite of uncomfortable symptoms that include nausea, headache, dizziness, and eye fatigue, is another concern of using such technologies. Enduring even low levels of simulator sickness could impact people's quality of life, activity level, and productivity, which is one reason researchers worry that people might try to rely on these devices for their day-to-day work.
Lastly, there are potential impacts on memory. In one 2014 experiment, Frank Steinicke, a professor of human-computer interaction at the University of Hamburg in Germany, spent 24 hours alternating between two-hour bouts of VR use and 10-minute breaks. The researcher found it difficult to tell what was real and what wasn’t.
Immersive digital worlds may impact users' thinking and socialization, potentially influencing their work or learning. Studies have shown that people wearing augmented reality headsets perform better on simple cognitive tasks but worse on more challenging ones.
Additionally, people wearing AR devices feel less socially connected to those around them who haven't worn headsets. Wearing a virtual or mixed-reality headset is inherently isolating, making in-person collaboration more challenging.
Mark Roman Miller, an assistant professor at the Illinois Institute of Technology, warns that these devices carry enormous potential for counterproductive distraction. He treats his smartphone like his shoes and believes that augmented and mixed-reality devices could further exacerbate the problem of divided attention that many smartphone users already encounter.
Courtesy: Scientific American
First-ever study carried out on the environmental costs of Artificial Intelligence
Recently, OpenAI CEO Sam Altman admitted that the AI industry is heading for an energy crisis, stating that the next wave of generative AI systems will consume much more energy than expected.
Generative AI systems are said to require enormous amounts of fresh water, both directly to cool their processors and indirectly to generate the electricity that powers their data centers.
Image generated by Mufawad using AI
In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI's most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district's water.
Google and Microsoft, creators of the Bard and Bing large language models, both consumed far more water than usual, with water usage rising by 20% and 34% respectively in one year, according to the companies’ environmental reports.
One research preprint suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027.
Scientists warn that to limit AI's ecological impacts, pragmatic actions must be taken. The industry could prioritize using less energy, build more efficient models, and rethink how it designs and uses data centers.
The BigScience project in France demonstrated with its BLOOM model that it is possible to build a model of a similar size to OpenAI’s GPT-3 with a much lower carbon footprint. However, there is little incentive for companies to change.
On February 1 this year, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute for Standards and Technology to collaborate with academia, industry, and civil society to establish standards for assessing AI’s environmental impact and to create a voluntary reporting framework for AI developers and operators.
However, to truly address the environmental impacts of AI, a multifaceted approach including the AI industry, researchers, and legislators is needed.
Courtesy: Nature
New AI learning language through babies’ eyes
A team of New York University researchers conducted an experiment using headcam video recordings captured from a single child between six months of age and their second birthday. The researchers found that a neural network could learn a substantial number of words and concepts from limited slices of the child's waking hours, which proved sufficient for genuine language learning.
The study demonstrates how recent algorithmic advances, paired with one child's naturalistic experience, have the potential to reshape our understanding of early language and concept acquisition.
Image generated by Mufawad using AI
The researchers analyzed the child's learning process, captured on first-person video from a light, head-mounted camera, drawing on more than 60 hours of footage. The footage contained approximately a quarter of a million word instances linked with video frames of what the child saw when those words were spoken.
The researchers then trained a multimodal neural network with two separate modules: one that takes in single video frames (the vision encoder) and another that takes in the transcribed child-directed speech (the language encoder). These two encoders were combined and trained using an algorithm called contrastive learning, which aims to learn useful input features and their cross-modal associations.
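A minimal sketch of this kind of two-encoder contrastive setup (a generic, CLIP-style illustration in PyTorch; the architectures, sizes, and names below are assumptions for illustration, not the authors' actual model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoEncoderContrastiveModel(nn.Module):
    """Toy vision encoder + language encoder sharing one embedding space.
    Architectures and dimensions are illustrative assumptions, not the study's model."""
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 128):
        super().__init__()
        # Vision encoder: flattens a small video frame into the shared embedding space.
        self.vision_encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU(), nn.Linear(512, embed_dim)
        )
        # Language encoder: averages word embeddings of the transcribed utterance.
        self.word_embeddings = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")

    def forward(self, frames: torch.Tensor, utterances: torch.Tensor):
        img = F.normalize(self.vision_encoder(frames), dim=-1)
        txt = F.normalize(self.word_embeddings(utterances), dim=-1)
        return img, txt

def contrastive_loss(img: torch.Tensor, txt: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss: matching frame/utterance pairs are pulled together,
    mismatched pairs within the batch are pushed apart."""
    logits = img @ txt.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(img.size(0))    # the i-th frame matches the i-th utterance
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Tiny smoke test with random data.
model = TwoEncoderContrastiveModel()
frames = torch.randn(8, 3, 64, 64)            # batch of video frames
utterances = torch.randint(0, 1000, (8, 6))   # batch of tokenised utterances
loss = contrastive_loss(*model(frames, utterances))
loss.backward()
```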
After training the model, the researchers tested it using the same kinds of evaluations used to measure word learning in infants. The results showed that the model was able to learn a substantial number of the words and concepts present in the child's everyday experience. Furthermore, for some of the words the model learned, it could generalize them to very different visual instances than those seen at training, reflecting an aspect of generalization also seen in children when they are tested in the lab.
These findings suggest that this aspect of word learning is feasible from the kind of naturalistic data that children receive while using relatively generic learning mechanisms such as those found in neural networks. The work was supported by the U.S. Department of Defense’s DARPA and the National Science Foundation.
Courtesy: Science Daily
World’s first Genetically Modified Banana approved for consumption
Australian scientists have gained regulatory approval to release a genetically modified (GM) variety of Cavendish banana for human consumption. The QCAV-4 variety is the world's first genetically modified banana and will be the first GM fruit approved by the federal government for growing in Australia. While scientists say it will be safe to eat, the GM variety is considered a "back-up option" in the fight against the fungal disease Panama Tropical Race 4 (TR4), to which it is said to be nearly immune.
Panama TR4 is a fungal disease that starves banana trees of nutrients, eventually killing the plant. There is currently no treatment or cure, and infected areas can no longer grow most banana types, including the popular Cavendish variety.
Image generated by Mufawad using AI
Professor James Dale, leader of the banana biotechnology programme at the Queensland University of Technology, welcomed this decision as it is an important step towards building a safety net for the world's Cavendish bananas from TR4, which has impacted many parts of the world already.
After more than seven years of field trials in the Northern Territory of Australia, QCAV-4 bananas will now be tested in Queensland paddocks. Professor Dale said his team would also focus on developing a gene-edited version of the QCAV-4 that was resistant to other diseases.
Courtesy: ABC.NET
Scientists gathering e-DNA across the world
In the late 1980s, Tamar Barkay, who worked for the US EPA, published a paper in the Journal of Microbiological Methods outlining a method called "direct environmental DNA extraction," which allowed researchers to take a census of the diversity and distribution of life in the environment around them.
Unlike previous techniques, which could identify DNA only from a single organism, environmental DNA (eDNA) sampling collects the swirling cloud of genetic material that organisms shed into their surroundings. The field of eDNA has grown significantly in recent years, with its own journal and scientific society.
eDNA serves as a surveillance tool, offering researchers a means of detecting the seemingly undetectable. By sampling eDNA, or mixtures of genetic material, in water, soil, ice cores, cotton swabs, or practically any environment imaginable, it is now possible to search for a specific organism or assemble a snapshot of all the organisms in a given place. Instead of setting up a camera to see who crosses the beach at night, eDNA pulls that information out of footprints in the sand.
eDNA is not foolproof: for instance, the organism detected in an eDNA sample might not actually live in the location where the sample was collected. Still, eDNA can help sleuth out genetic traces, some of which slough off into the environment, offering a thrilling and potentially chilling way to collect information about organisms, including humans, as they go about their everyday business.
The conceptual basis for eDNA dates back a hundred years, before the advent of molecular biology, and is often attributed to Edmond Locard, a French criminologist working in the early 20th century. In 1929, Locard proposed a principle: Every contact leaves a trace. In essence, eDNA brings Locard's principle to the 21st century.
In 2003, a study led by Eske Willerslev demonstrated the feasibility of detecting larger organisms with the technique, including plants and woolly mammoths. In the same study, sediment collected in a New Zealand cave revealed an extinct bird: the moa. These applications for studying ancient DNA stemmed from a prodigious amount of dung dropped on the ground hundreds of thousands of years ago.
The next evolutionary leap in eDNA's history took shape around the search for organisms currently living in earth's aquatic environments. In 2008, a headline appeared: “Water retains DNA memory of hidden species.” The study suggested an easier way to find brown-and-green bullfrogs, which are considered an invasive species in western Europe.
e-DNA is said to be a potential tool with limitless applications, especially as advances enable researchers to sequence and analyze larger quantities of genetic information. Scientists use eDNA to track creatures of all shapes and sizes, from tiny bits of invasive algae to sightless moles. Researchers can also sample entire communities by looking at eDNA found on wildflower blossoms or the eDNA blowing in the wind as a proxy for visiting birds and bees.
Environmental DNA (eDNA) has revolutionized species detection and monitoring, particularly in conservation biology. Researchers have developed a method that can detect rare and threatened species, such as amphibians, mammals, and dragonflies, even in low abundance in Europe.
However, the use of eDNA for detecting species is complex and controversial, as trace evidence can move around and the presence of certain DNA doesn't necessarily indicate a species' existence. Despite this, a handful of companies are starting to commercialize the technique.
Courtesy: Undark.com
Introducing Beef-Rice, a new food that can help overcome malnutrition
A new type of food has been created in which rice is used as a scaffold to grow beef muscle and fat cells, creating an edible, "nutty" rice-beef combo that can be prepared in the same way as normal rice. The study, published in Matter, uses manufacturing methods similar to those for other cultured meat products, where animal cells are grown on a scaffold in a laboratory, bathed in a growth medium. The beef-rice has a higher fat and protein content than standard rice.
Image generated by Mufawad using AI
A team of South Korean researchers hopes that the beef-rice will find use as a supplement for food-insecure communities or to feed troops, and will reduce the environmental impact of rearing cattle for beef. The need for alternative protein sources or making conventional livestock production more efficient is critical, and as of last year, only the United States and Singapore had approved the sale of lab-grown meat.
Co-author Sohyeon Park says that the team tried to grow beef cells directly in the porous crevices of a grain of rice, but the cells didn't take well to the grain. Instead, they found that coating the rice in fish gelatin and the widely used microbial food additive transglutaminase improved cell attachment and growth.
After glazing uncooked rice grains with the gelatin-additive mix, the team seeded the grains with bovine muscle and fat cells, and the cells sat in the growth medium for around a week. After the culturing period, Park washed and steamed the beef-infused rice as she would conventional rice.
The nutritional content of the hybrid rice is modestly different: a 100-gram serving contains 0.01 grams more fat and 0.31 grams more protein than regular rice, a 7% and 9% increase respectively. The team estimates that 1 kilogram of the rice as it's made now would cost US$2.23, comparable with normal rice ($2.20 per kilogram) and much less than beef ($14.88 per kilogram).
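A quick back-of-the-envelope check of those figures (pure arithmetic on the numbers quoted above; the implied baseline values for plain rice are inferred from the stated percentages, not taken from the paper):

```python
# Absolute increases and percentage changes per 100 g serving, as quoted above.
extra_fat_g, fat_pct = 0.01, 0.07           # +0.01 g fat, a ~7% change
extra_protein_g, protein_pct = 0.31, 0.09   # +0.31 g protein, a ~9% change

# Implied baseline content of plain rice (an inference, not a figure from the paper).
baseline_fat = extra_fat_g / fat_pct                 # ~0.14 g fat per 100 g
baseline_protein = extra_protein_g / protein_pct     # ~3.4 g protein per 100 g
print(f"Implied plain-rice baseline: {baseline_fat:.2f} g fat, "
      f"{baseline_protein:.1f} g protein per 100 g")

# Cost comparison per kilogram, using the prices quoted above.
hybrid, rice, beef = 2.23, 2.20, 14.88
print(f"Hybrid rice costs {(hybrid / rice - 1):.1%} more than rice and "
      f"{(1 - hybrid / beef):.0%} less than beef per kg")
```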
If production can be scaled up and kept affordable, the hybrid rice could be a cheaper, more efficient source of nutrition than large pieces of lab-grown meat.
Courtesy: Nature
Scientists working on self-fertilizing plants
Nitrogen is essential for life: it makes up 78% of the air we breathe and is a key ingredient of our food supply. However, nitrogen atoms typically come in pairs, making it difficult for our cells to separate them. Instead, we obtain nitrogen from plants or from animals that eat plants. The only cells on Earth that can render nitrogen palatable for plants and animals are certain kinds of microbes, known as nitrogen-fixing bacteria, which "fix" nitrogen by converting N2 into NH3 (ammonia). The survival of every plant and animal on Earth depends on the work of these bacteria, which must fix enough nitrogen to keep the biosphere's machinery running.
For most of human history, these bacteria fixed enough nitrogen to keep up with the human appetite. However, this started to change more than a century ago, when William Crookes, the president of the British Association for the Advancement of Science, gave an alarming inaugural address. He noted that sodium-nitrate deposits in Chile, a major source of usable nitrogen for plants, would soon dwindle.
Image generated by Mufawad using AI
In 1909, German chemist Fritz Haber demonstrated a nascent but scalable method for turning N2 into ammonia. Carl Bosch, at the chemical-and-dye company B.A.S.F., industrialized Haber's method, and they each earned a Nobel Prize.
Today, the Haber-Bosch process produces roughly two hundred million tons of ammonia a year, and has allowed the human population to reach eight billion. About half of the nitrogen in your body comes from the Haber-Bosch process, but its costs are enormous. The reaction happens at approximately a thousand degrees Fahrenheit and three hundred times atmospheric pressure, using between one and two per cent of the world's energy. Meanwhile, fertilizer runoff pollutes the environment.
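For reference, the underlying chemistry is the standard ammonia-synthesis reaction (textbook chemistry, not a detail specific to this article): nitrogen and hydrogen are combined over an iron catalyst at high temperature and pressure.

```latex
\mathrm{N_{2} + 3\,H_{2} \;\rightleftharpoons\; 2\,NH_{3}}
```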
However, recent advancements in biotechnology suggest a new possibility: we might be able to extract these bacteria's mechanisms and place them inside plants. Some crops, like legumes, act as hosts for diazotrophs, the bacteria that fix nitrogen from within the plant.
Cereals such as wheat and rice, by contrast, depend on taking up nitrogen from the surrounding soil that has already been broken down by diazotrophs or supplied by the Haber-Bosch process. Researchers are hoping to transfer genes from diazotrophs into cereals, giving them the power to fix nitrogen. We may someday have plants that can fertilize themselves.
Self-fertilizing plants have been a scientific goal since the nineteen-seventies, with early progress in 1972, when two British scientists induced E. coli, a bacterial species that does not normally fix nitrogen, to do so by importing genes from another bacterial species that does. Transferring the same ability into plants, however, has so far proved elusive.
Vipula Shukla, a plant biologist and senior program officer for agriculture at the Bill & Melinda Gates Foundation, believes that success in self-fertilizing plants will require more than engineering plants. She points to the complementary roles of genetics, environment, and management—the GEM paradigm.
But why did plants not evolve to fix atmospheric nitrogen themselves? One possible explanation is that they haven't faced enough evolutionary pressure: it is only since the advent of industrial agriculture that the productivity of plants has fallen short of our expectations.
Although there is no global food shortage today, local food crises are created in part by problems with food distribution and storage. The fertilizer supply chain is also a factor, with shocks like the pandemic, wars, and inflation leading to widespread hunger.
Self-fertilizing crops could reduce agriculture's greenhouse impact by a third by reducing the production and transport of fertilizer and by allowing more crops to be grown locally.
Courtesy: New Yorker