Yohai Schweiger – www.israelhayom.com

How Nvidia plans to teach AI to live in the real world
Tue, 16 Dec 2025

Nvidia's vice president of simulation technologies details how the company's Omniverse platform serves as a "cognitive kindergarten" where humanoid robots master real-world physics through thousands of virtual training scenarios, marking the foundation of the next AI revolution.

The post How Nvidia plans to teach AI to live in the real world appeared first on www.israelhayom.com.

Before a humanoid robot can open a door without breaking the key in the lock, lift a glass without shattering it, or cross a street without startling a driver, it needs to train extensively. Similarly, before a factory robot learns to react to a bolt falling from a conveyor or another robot suddenly slowing in the work path, it must experience these scenarios repeatedly – thousands of times in situations no one would want to test around humans.

The robot accomplishes all this in one place: the simulator. Nvidia's simulation world, Omniverse (the company's virtual environment platform), serves as the environment where robots are "born." It functions as a cognitive kindergarten where humanoid robots learn to walk, operate, understand, react, fall, and rise. Just as an infant develops cumulative motor and cognitive abilities, the robot learns within an artificial world governed by real-world physical laws.

The simulator generates thousands of situational variations: a glass falling at a different angle, a slightly higher step, weak lighting, a person crossing too quickly in the movement path – to teach the robot to react to as many scenarios as possible.
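This variation strategy is known in robotics as domain randomization. A minimal Python sketch of the idea follows; the parameter names and ranges are invented for illustration and are not Omniverse's actual API:

```python
import random

def randomize_scene(rng: random.Random) -> dict:
    """Sample one training scenario with randomized physical parameters.

    The parameters and ranges here are illustrative only; a real
    simulator exposes far richer controls over physics and lighting.
    """
    return {
        "glass_drop_angle_deg": rng.uniform(0.0, 90.0),  # angle of the falling glass
        "step_height_m": rng.uniform(0.15, 0.25),        # slightly varied step height
        "light_intensity": rng.uniform(0.2, 1.0),        # dim to bright lighting
        "pedestrian_speed_mps": rng.uniform(0.5, 3.0),   # person crossing the path
    }

# Generate thousands of distinct scenarios reproducibly from one seed.
rng = random.Random(42)
scenarios = [randomize_scene(rng) for _ in range(10_000)]
print(len(scenarios))  # 10000
```

A policy trained across all 10,000 variations, rather than one fixed scene, is far less likely to fail the first time the real world deviates from the script.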

"If we want to build intelligence that understands the physical world and operates within it, we need to teach it in a world similar enough to reality so it can function within it safely, efficiently, and controllably," Rev Lebaredian, Nvidia's vice president of simulation technologies and Omniverse, said in an exclusive conversation with Israel Hayom.

Rev Lebaredian, Nvidia's vice president of simulation technologies and Omniverse (Photo: Nvidia)

A defining moment in the journey

Lebaredian joined Nvidia in 2002, after working in the film industry. Early in his career, he worked at production houses like Disney and Warner Bros., and later founded a startup developing advanced rendering technologies. In cinema, the rendering process transforms raw graphics into realistic images that appear as if filmed by a camera – a process that was particularly slow and demanding in the early 2000s, sometimes requiring hours of computation for each frame.

As part of his work, he contributed to creating effects in films like "Armageddon," "X-Men," "The Sum of All Fears," and Disney's "Mighty Joe Young," a film nominated for an Oscar for effects thanks to the digital gorilla character at the story's center.

In the early 2000s, Nvidia was primarily a gaming chip manufacturer, far from the AI giant it is today, valued at approximately $5 trillion. Lebaredian joined exactly when Nvidia's flagship product, the graphics processing unit (GPU), began transforming, and he has accompanied the company from the era of crude early-2000s computer games to today's AI revolution, which is changing the world at breakneck speed.

"I joined Nvidia at a defining moment in its journey, precisely when we launched the ability to program shaders (programmable graphics functions) directly on the GPU. This significantly accelerated rendering capabilities, but more importantly, this was the moment the GPU opened for the first time to free programming. I worked then on the first programming language for graphics processors, CG, which became the first brick on the path to CUDA (Nvidia's parallel computing platform), the language dominating parallel computing today," he recounted.

Today, as head of the company's simulation division – Omniverse – Lebaredian is among the handful of senior executives leading simulation and physical intelligence at the company. Nvidia believes this field will drive the next major technological revolution, bringing artificial intelligence into the physical space of daily life. In this revolution, the division Lebaredian heads will have one of the most significant roles.

"Nvidia CEO and founder Jensen Huang said years ago that the most important algorithms will be those understanding the physical world and capable of influencing it," Lebaredian stated.

Nvidia CEO Jensen Huang listens as President Donald Trump speaks during the Saudi Investment Forum at the Kennedy Center, Wednesday, Nov. 19, 2025, in Washington (Photo: AP / Evan Vucci)

From language understanding to world understanding

Those algorithms Huang discussed years ago are materializing today in a new field of artificial intelligence: the world model. Just as a language model learns from billions of sentences to predict which word will most probably come next – and thus, in effect, to understand language, meaning, and context – a world model learns to predict what will happen next in the physical world: how an object will move, how a force will affect it, what will happen if a door opens too quickly, or where an object placed at a given angle will roll.

"A world model is the central foundation of the next revolution: physical intelligence, meaning AI that understands not just words, but the universe," Lebaredian explained. According to him, this is a statistical model developing a probabilistic understanding of dynamic reality, not of text. This model will essentially be the robot's "brain," decoding the environment's visual information and knowing how to operate, where to turn to avoid an obstacle, and what force to apply to crack an egg while making an omelet, for example.
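The parallel to next-word prediction can be made concrete with a toy next-state predictor. In the sketch below the "model" is hard-coded physics for a falling object – a real world model would approximate this mapping statistically from data – but the rollout interface, predicting state after state the way a language model predicts token after token, is the same:

```python
def step(state, dt=0.01, g=9.81):
    """Predict the next state (height, velocity) of a falling object.

    A learned world model approximates this kind of mapping from data;
    here the physics is hard-coded purely to show the prediction loop.
    """
    height, velocity = state
    velocity -= g * dt                      # gravity changes velocity
    height = max(0.0, height + velocity * dt)  # velocity changes position
    return (height, velocity)

# Roll the model forward step by step, like token-by-token generation.
state = (1.0, 0.0)  # 1 m above the floor, at rest
for _ in range(30):
    state = step(state)
print(round(state[0], 3))  # 0.544 -- predicted height (m) after 0.3 s
```

The robot's planner can query such a predictor thousands of times per second, asking "what happens if I do X?" before committing to an action.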

But to do this, it needs data of a type that doesn't exist on the internet. Not words, but material, movement, acceleration, friction, light, temperature, interactions, human environments, and physical infrastructures. The training process is fundamentally similar to that of language models – learning from countless examples and situations – except that here the examples must come from the physical world itself.

"The major problem with physical intelligence," Lebaredian explained, "is that we don't have a digital archive of physics. We need to capture it from reality – and that's expensive, dangerous, and limited. The solution is to recreate reality in simulation, and then produce synthetic data from it."

According to Lebaredian, Nvidia's simulation world is not merely a three-dimensional model. It is an engine of natural laws. A city where every lamppost, sidewalk, car, and tree branch is coded to behave as in reality. In this environment, a robot can walk thousands of simulated years in a short time, accumulating experience impossible in the real world.

The two covers of Time magazine's 2025 Person of the Year issue with an illustration by Peter Crowther (left) depicting Jensen Huang, President and CEO of Nvidia; Elon Musk, xAI; Dario Amodei, CEO of Anthropic; Lisa Su, CEO of AMD; Mark Zuckerberg, CEO of Meta; Demis Hassabis, CEO of DeepMind Technologies; Fei-Fei Li, Co-Director of Stanford University's Human-Centered AI Institute and CEO of World Labs; and Sam Altman, CEO of OpenAI, and a painting by Jason Seiler (right) depicting the same people, in this undated handout combination image obtained by Reuters on December 11, 2025 (Photo: TIME Person of the Year / Reuters)

Releasing the "genie" from the GPU

To understand Nvidia's role in the AI revolution and the magnitude of the mission the company placed on Lebaredian's shoulders, one must return to the story's beginning – and trace the development of one of recent decades' most influential components: the graphics processing unit.

This development did not amount to gradual performance increases. It was a deep evolution in which each new GPU generation changed the computer's very nature – to such an extent that some believe that without Nvidia, not only would a large language model not run at the required speed, but we might never have imagined the very possibility.

Language models, world models, and advanced robotics all feed on enormous parallel computing power, the kind that needed to be born before theoretical thinking about them became possible. Twenty years ago, the GPU was a dedicated graphics unit designed to accelerate computer games. It was designed as a "drawing machine," receiving a series of fixed commands defining how a three-dimensional object should appear on screen. All stages were rigid: how light falls, how reflection forms, whether the material is shiny or matte. The processor could execute these tasks quickly, but nothing existed beyond this.

"In the early 2000s, everything was very simple and limited," Lebaredian recalled. "You couldn't write your own code. Performance was high, but flexibility didn't exist." According to him, the field's first significant revolution occurred when Nvidia opened the shading stage to programming. Instead of built-in models, developers could write their own functions, recreate light and material laws, and build graphic worlds as they imagined them. The change then appeared as a breakthrough for the gaming world alone, but in practice, it freed the GPU from its initial engineering constraints.

The drawing machine became a machine that understood somewhat more about how the world behaves. The hardware ceased being a black box and became an open platform. This was the moment the seed was planted that later became a computing superplatform.

"I've been at Nvidia for 23 years," Lebaredian said, "and almost throughout this entire period, the company has dealt with the question of what else the GPU can be beyond what it was designed for."

"Far beyond what we imagined"

Lebaredian recounted that as shader programs became more flexible, more and more developers identified potential within the GPU far exceeding graphics. Thus, for example, academic researchers began using the graphics processor for physics calculations – they took the same shading function that calculates light and adapted it to compute airflow, water movement, or particle dynamics. The graphics processor's essence as a computer with powerful parallel computing capabilities gradually became clear.

"We saw researchers using it for things completely unrelated to graphics – physical simulations, fluid dynamics, molecules. This was the moment we understood our processors could serve far beyond what we imagined," he stated.

At this stage, Nvidia understood it must change direction and give this computing body a new form. In 2006, CUDA (Nvidia's parallel computing platform) launched, a software environment allowing regular code to run on the GPU. No more disguising scientific problems as graphics, no more manipulating textures or pixels – but a complete computer capable of processing large arrays, running loops, and executing complex algorithms quickly. Historically, this was the turning point at which the GPU ceased to be a graphics accelerator and became a general-purpose computing engine.
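The shift CUDA enabled was conceptual as much as technical: express a computation as one small function applied independently to every element of a large array, and let thousands of hardware threads run it at once. As a rough CPU-side analogy (a thread pool standing in for GPU threads – this is an illustration of the programming model, not CUDA itself):

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(x: float) -> float:
    """One 'thread' of work: the same function every element runs.

    Under CUDA, thousands of GPU threads each execute a body like this
    on their own array element simultaneously; here a thread pool is a
    crude stand-in to show the data-parallel structure.
    """
    return x * x + 1.0

data = [float(i) for i in range(1_000)]

# Map the kernel over the whole array "in parallel".
with ThreadPoolExecutor() as pool:
    result = list(pool.map(kernel, data))

print(result[:3])  # [1.0, 2.0, 5.0]
```

The point is that no graphics concepts appear anywhere: no textures, no pixels, just arrays and a function – exactly the liberation the article describes.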

The network that learned to "see"

Here arrived another defining moment in the development of artificial intelligence, made possible by Nvidia's programming language. AlexNet – the groundbreaking 2012 neural network that learned to identify objects in images, such as cats, dogs, and cars, with high accuracy – ran on CUDA. AlexNet marked the beginning of the past decade's computer vision era, with countless applications from smart security cameras to facial recognition systems in smartphones. That same processor, which previously drew shadows, became a machine running a learning model that identifies complex patterns – a machine that learned to "see."

Here, it became clear how critical this link was. Those who tell AI's history usually emphasize the algorithms but almost always ignore the fact that behind all this stood the infrastructure that realized the vision: parallel computation over enormous quantities of data, at speeds and prices that made the very idea of large models possible.

In a sense, had the GPU not first freed itself from its graphic constraints, we might not have been able to think about a language model as a feasible project. In retrospect, the GPU appears to have undergone the most dramatic transformation chain in computing history: from drawing machine to scientific computer, from graphics accelerator to global AI engine, and from imaging system to the source of the virtual reality in which the next generation's robots are being raised.

Nvidia did not merely improve the GPU. It reinvented it repeatedly until it became the foundation supporting today's entire artificial intelligence revolution – and likely will be tomorrow's as well. "We are only at the beginning of the process of creating foundational world models. No one will 'own' them or be their exclusive owner – this is a project all humanity will need to contribute to," Lebaredian concluded.


Israel develops 'ultimate solution' to drone threat
Tue, 28 Oct 2025

Skylock CEO Baruch Dilion reveals how Israeli anti-drone technology is addressing Europe's unprecedented aerial security crisis with multi-layered defense systems combining lasers, electronic warfare, and drone-versus-drone interception.

The post Israel develops 'ultimate solution' to drone threat appeared first on www.israelhayom.com.

Over the past decade, drones have transformed from a beloved photography accessory into an inexpensive, accessible, and lethal attack tool. They hover low, silently, carrying cameras, explosives, or jamming equipment, and can cause devastating damage to adversaries.

The war between Russia and Ukraine, alongside Israel's multi-arena warfare, has underscored the pivotal role of drones and unmanned aerial vehicles on the battlefield, sparking a surge of demand in the defense industry for anti-drone solutions.

Numerous companies produce anti-drone systems that detect and intercept drones using diverse techniques, including communication and GPS jamming, RF detection, video processing, and the integration of optical sensors and radars. Yet, each time a response to the threat is created, a new generation of drones emerges that circumvents it. This is an ongoing cat-and-mouse game in which what worked yesterday may not apply today.

Skylock Skydefender System (Photo: Skylock)

At first, drones were controlled through a fixed frequency and could be intercepted relatively easily via frequency jamming. Very rapidly, drones adopted "frequency hopping" techniques, enabling them to evade jamming systems. In response, defense systems were compelled to develop smarter scanners powered by machine learning.

When drones began relying on GPS, satellite-signal spoofing methods were developed, causing them to lose orientation. Later, drones appeared that navigate using internal sensors and computer vision, technologies that render them resistant to radio jamming. Then arrived the swarms, posing the quantitative challenge of intercepting dozens of targets at once. Each time defense closes a gap, offense generates a new way to circumvent it, and this occurs at a dizzying rate. Companies working in this field cannot afford to become complacent. They must consistently update, develop, and tailor solutions at the pace of the threat.
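The "frequency hopping" technique mentioned above can be sketched simply: drone and controller derive the same pseudo-random channel sequence from a shared seed, so a jammer that does not know the seed cannot predict which channel to jam next. A toy Python illustration (channel count and seeds are invented; real links use cryptographic sequence generators and tight time synchronization):

```python
import random

CHANNELS = 64  # number of available radio channels (illustrative)

def hop_sequence(seed: int, length: int) -> list:
    """Derive a pseudo-random channel-hopping sequence from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(length)]

# Drone and controller share the seed, so their sequences match exactly.
drone = hop_sequence(seed=0xC0FFEE, length=100)
controller = hop_sequence(seed=0xC0FFEE, length=100)
print(drone == controller)  # True

# A jammer guessing without the seed follows a different sequence and
# lands on the right channel only by chance (about 1 hop in 64 here).
jammer = hop_sequence(seed=1234, length=100)
matches = sum(d == j for d, j in zip(drone, jammer))
print(matches < 100)  # True
```

This is why the defensive answer was smarter scanners driven by machine learning: if you cannot predict the sequence, you must detect and follow it in real time.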

Skylock Skydefender System (Photo: Skylock)

"The drone threat never stops growing and developing, literally day by day. This requires us to always be one step ahead, providing solutions not only to current threats but also to those that will emerge in the future," said Baruch Dilion, CEO of Israeli company Autonomous Guard, which develops drone detection and interception systems.

Dilion spoke to Israel Hayom precisely as his company was demonstrating the interception system for representatives of a European nation. "The Europeans are in total hysteria. They understand this is real. They're accelerating long-term programs and, in the short term, building point-defense plans. We're participating in several such projects," he added.

The shift to aerial symmetry

Skylock, a subsidiary of Autonomous Guard, presents a multi-layered defense philosophy. The detection layer relies on a blend of RF, optical, acoustic, and radar capabilities, along with information fusion, to identify as many types of aerial threats as possible under varying conditions.
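The benefit of fusing several weak sensors can be illustrated with the simplest fusion rule, sometimes called noisy-OR: assuming independent sensors, the combined probability of detection is one minus the probability that every sensor misses. This is a textbook simplification, not Skylock's actual fusion engine, which must also handle correlated sensors and false alarms:

```python
def fuse_detections(probabilities):
    """Noisy-OR fusion: probability that at least one sensor detects.

    Assumes the sensors are independent -- real fusion engines weight
    correlated and unreliable sensors far more carefully.
    """
    p_miss = 1.0
    for p in probabilities:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# RF, optical, acoustic, radar: individually mediocre, jointly strong.
print(round(fuse_detections([0.6, 0.5, 0.3, 0.7]), 3))  # 0.958
```

Four sensors that each miss often still combine into a detection probability above 95 percent, which is the arithmetic behind the multi-layered detection philosophy.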

Following identification, the interception operation relies on an array of electronic warfare and jamming capabilities, as well as kinetic measures. Now, a laser is also joining the arsenal of interception capabilities.

Skylock possesses two types of lasers: one for a range of hundreds of meters and another for up to 1.5 kilometers (0.9 miles). "Laser is a fantastic solution," Dilion said, "but it's point-based and the range is limited. It's excellent when drones move visually and don't rely on an external link, but the inability to address a swarm of drones is a real limitation. When ten targets arrive simultaneously, you need something else."

Here is where the next vision comes into play: drone-versus-drone. Rather than launching expensive interceptors from ground or air sources, the concept is to achieve a symmetrical response in quantitative terms: launching many relatively inexpensive interceptor drones against a drone attack.

"To deal with swarms, you must respond with quantity," Dilion explained. "An Iron Dome interceptor costs a lot. An interceptor drone costs a few thousand dollars. If I detect five drones, I launch five. If I detect 50, I launch 50."

Skylock is not the first company trying to embrace a drone-versus-drone interception approach, but the practical execution is more intricate than it sounds. The primary problem is the "last mile" – roughly the final 100 meters where the target maneuvers close to the ground and terrain conditions are especially challenging. "Quite a few companies are claiming to offer such a solution; it may work well in controlled demonstrations, but not in operational conditions," Dilion said. "In the system we're developing, the interceptor drone locks onto the target at close range using optical sensing, and from that point it locks on and crashes into the target. This is the ultimate solution."

Baruch Dilion, CEO of Israeli company Autonomous Guard (Photo: Skylock)

Europe fortifies

In Europe, the war in Ukraine has accelerated a deep process of change: countries are moving from point defense of critical facilities – bases, airports, power stations – to regional defense, with the long-term goal of establishing a unified spatial network that synchronizes all monitoring and detection areas. "In the end, everyone will understand that one monitoring network and full information synchronization are necessary. But until that happens, point solutions are needed now," Dilion said.

Skylock belongs to parent company Autonomous Guard, which also owns BeeSense, a company creating ground, aerial, and maritime surveillance systems. The connection between the two permits Autonomous Guard to present a multi-layered defense envelope, from information collection to interception. "The synergy between BeeSense and Skylock enables providing a complete defense solution – ground, aerial, maritime – from one company, because the systems are integrated," Dilion explained. "For example, BeeSense detection systems are integrated as detection means for drones in Skylock's system." According to Dilion, the operational benefit of connecting the two companies is the capability to adjust defense to the required scale: from a single facility to an entire border network. "Our systems can interface even for the defense of a long border."

According to Dilion, the challenge today is not only identifying the threat but also dealing with the volume of information. "It's possible that even the amount of information that preceded October 7 was an inhibiting factor in prevention. Information fusion and cross-referencing between systems allow the user to distill what they really need to make the right decision."


'When our robot takes a bullet, it saves a soldier's life'
Wed, 09 Jul 2025

The post 'When our robot takes a bullet, it saves a soldier's life' appeared first on www.israelhayom.com.

The Gaza campaign has distinguished itself from all previous conflicts as the first battlefield where robots, drones, sensors, and artificial intelligence have fought alongside human soldiers. Military experts widely acknowledge that the unprecedented integration of these cutting-edge solutions during active combat has fundamentally transformed warfare dynamics, denying enemy forces critical advantages while preserving soldiers' lives and contributing significantly to operational success. Beyond established defense contractors, the conflict has revealed an extensive ecosystem of defense technology startups whose innovations have been deployed at scale across the combat zone.

Roboteam stands among these pioneering companies, specializing in tactical robots designed for reconnaissance and combat engineering operations. The firm's MTGR robot has seen extensive deployment throughout Gaza Strip operations, conducting missions inside Hamas tunnel systems while detecting concealed openings, explosive traps, and improvised devices, and establishing communication networks in hostile territory.

Company founder and CEO Yossi Wolf, speaking with Israel Hayom, characterizes the technological transformation as revolutionary. "Previously, unmanned systems for air and ground operations represented unfulfilled potential. While pilot programs existed, these platforms never achieved the operational scale witnessed in this conflict. Gaza represents a globally unprecedented campaign where robotics and AI became fundamental combat elements," Wolf explained.

The MTGR robot during operations with the Israeli Defense Force (Photo: Roboteam)

Wolf emphasizes the technological dimension's decisive impact on battlefield outcomes. "These capabilities have proven to be definitive game-changers in asymmetric warfare scenarios. We're deploying systems that never experience fatigue, continuously generate vast data streams, and maintain near-absolute spatial awareness. These capabilities have eliminated safe havens for enemy forces," he noted.

From Afghanistan to Gaza

Israeli Air Force veterans Yossi Wolf and Elad Levi established Roboteam in 2010 with the mission of developing tactical robots capable of replacing frontline soldiers while minimizing human casualties during reconnaissance and combat engineering operations.

The company's primary platform, the MTGR tactical ground robot, weighs 13 kilograms (29 pounds) and maintains soldier-portable dimensions. Engineers designed the system to navigate challenging environments, including multi-story buildings, staircases, drainage systems, and underground tunnels. The robot features a manipulator arm capable of explosive ordnance disposal, door breaching, and sample collection while providing real-time intelligence before combat forces enter hazardous areas.

The company's 2010 founding coincided with the deadliest year for American forces in Afghanistan, where approximately 371 soldiers died from improvised explosive devices while countless others sustained severe injuries. Afghanistan became the MTGR's inaugural operational deployment. The platform rapidly entered US Army service, with Roboteam securing designation as the primary tactical ground robot supplier for the US Marine Corps. The company has delivered more than 1,200 units to date, generating approximately $150 million in total revenue.

"The MTGR robot has seen extensive deployment throughout Gaza Strip operations" (Photo: Roboteam)

Wolf describes the Afghan experience as foundational. "Afghan insurgents mastered sophisticated trap placement across road junctions, drainage channels, vertical shafts, and tunnel networks, inflicting substantial American casualties. Our Gaza deployment was built upon Afghanistan's operational lessons. The US Army leads global military robotics evaluation, standardization, and adoption processes. American military pilot programs typically involve substantial procurement volumes, enabling companies like ours to achieve commercial viability and critical operational mass," Wolf observed.

"Gaza's tactical robot requirements originated from field commanders, but Defense Ministry leadership eliminated bureaucratic obstacles and facilitated rapid solution integration within combat units during active operations. This created extraordinary synergy between operational forces and defense technology providers," he added.

From Fortnite to the battlefield

Operational simplicity represents another factor accelerating solution adoption during current combat operations. These robotic systems demonstrate extremely high autonomous operation levels, earning the designation as "automatically guided robots" (AGR). Operators need not manually control or navigate these platforms, instead issuing mission-specific directives. Self-recovery capabilities enable automatic repositioning following vehicle rollovers, while advanced environmental awareness and spatial comprehension systems maintain operational effectiveness. User interfaces deliberately mirror civilian gaming console designs.

The founder and CEO of Israeli defense company Roboteam, Yossi Wolf (Photo: Roboteam)

Wolf details the civilian technology integration approach. "These platforms incorporate commercial sector innovations. Our robot operates through gaming console-style control stations. The system performs most functions independently while presenting comprehensive information through intuitive user displays. Younger soldiers arrive with existing gaming familiarity and adopt these tools instinctively, requiring minimal formal training," he explained.

The AI era and robot fleets

Roboteam has expanded its product portfolio in recent years, developing five additional robot variants across different size categories for specialized mission requirements. "This represents an exceptionally demanding technical challenge. Robots must function under extreme physical conditions while remaining operator-friendly under combat stress. Performance expectations are extraordinarily high with zero tolerance for system failures. However, when our robot takes a bullet or explodes, it's a good feeling because it saved a soldier's life," Wolf stated.

Wolf predicts artificial intelligence will exponentially expand these systems' capabilities while completely restructuring future battlefield dynamics. "We're implementing AI across all operational levels, from individual robot platforms through user interfaces to data processing systems. AI enables management of significantly increased complexity levels. Operators can conduct natural language conversations with robots, issue verbal commands, and receive spoken responses," he described.

Wolf envisions future battlefields populated by substantially more robots and correspondingly fewer human soldiers. "The next evolutionary phase involves coordinated robot fleet operations, with multiple platforms collaborating on unified missions including area security, personnel screening, and similar tasks. Similar to civilian applications, AI enables expanded mission capabilities. This will permit reduced soldier deployments while minimizing human exposure to combat risks," he concluded.


Huawei scrambles to keep China in the AI race
Thu, 08 May 2025

The post Huawei scrambles to keep China in the AI race appeared first on www.israelhayom.com.

After the Trump administration recently banned, as part of the escalating trade war between the superpowers, Nvidia from continuing to export any type of AI chips for data centers to China, a local player is trying to fill the void and keep China strong in the intensifying artificial intelligence race. According to various reports, Huawei is currently accelerating the production of a new AI chip, Ascend 910C, designed to serve as an alternative to Nvidia's powerful chips.

Nvidia CEO Jensen Huang holds a Grace Blackwell NVLink72 as he delivers a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada on January 6, 2025 (Photo: AFP / Patrick T. Fallon)

Huawei has already begun supplying samples of the new chip to local cloud providers and AI companies like Alibaba and Deepseek, and aims to produce hundreds of thousands of units of the new chip this year. Huawei's effort to provide an alternative to Nvidia is not just a matter of commercial competition in the AI infrastructure market, but an issue of national-strategic importance for China in its pursuit of technological independence and in its struggle against the United States for AI supremacy. Some are asking whether the American restrictions not only failed to constrain China's progress but actually transformed it into an equal superpower.

Huawei has not officially and openly announced the new chip, and reports about its performance are somewhat contradictory. According to some sources, apparently coming from within Huawei, the new chip provides processing and memory capabilities equivalent to Nvidia's previous generation chip, the H100. This is a dramatic claim, but should be taken with a grain of salt. According to a recent report in the technology blog Tom's Hardware, researchers from Chinese AI company Deepseek recently examined the chip and found it lacking in model training capabilities, running AI models "at a rate of 60% compared to Nvidia's H100."

The engineering structure of the chip also reveals Huawei's development challenges. The 910C is a kind of "hybrid creation" that connects two previous generation 910B chips in a special package. In other words, unlike Nvidia, which offers a new breakthrough with each new generation, the 910C is a kind of "improvisation."

Another question mark concerns Huawei's ability to produce the chip in quantities that will satisfy the enormous thirst among Chinese cloud and AI companies, now that Nvidia's supply pipeline to the Chinese market is blocked. Manufacturing such advanced chips requires an entire ecosystem of innovative technologies, and in recent years the tightening American embargo has made it increasingly difficult for China to gain access to this essential equipment.

Huawei Atlas 800 inference server is displayed at InnoEX Fair, in Hong Kong, China April 15, 2025 (Reuters / Tyrone Siu)

China's vulnerability lies in two critical links in the chip manufacturing process: American restrictions block China's access to two suppliers, Taiwan's TSMC and the Netherlands' ASML, without which advanced AI chips like Nvidia's cannot be produced. ASML is the only company in the world that supplies EUV lithography machines. These machines, using advanced optics, enable "carving" tiny transistors as small as 7 nanometers and below on silicon wafers. ASML's CEO recently estimated that "denying these machines to China will perpetuate a 10-15 year lag behind the West."

Chinese manufacturers are also failing to close the gap with TSMC. For illustration, TSMC is expected to begin production of 3-nanometer transistors this year, and even unveiled 1.4-nanometer production about a week ago, to be launched in 2028. In contrast, China's SMIC, which manufactures Huawei's chips, is still "stuck" at 7 nanometers, which is the process Nvidia used to manufacture its AI chip in 2020.

The jump from 7 nanometers and below is complex, and China will struggle to accomplish it without access to equipment suppliers like ASML and testing equipment manufacturers like Israel's Nova. The smaller the transistors, the more efficient and powerful chips can be developed and manufactured. This means that by the end of the decade, Nvidia's chips could be thousands of times more powerful than Chinese alternatives, which would decide the AI battle in favor of the United States.
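The scaling relationship described above can be put in rough numbers. Process-node names like "7 nanometers" are marketing labels rather than literal gate lengths, so the following is only an idealized, back-of-envelope sketch of the density gap between nodes, not a claim about actual chips:

```python
# Back-of-envelope sketch: relative transistor density between process nodes.
# Node "names" (7 nm, 2 nm, ...) are marketing labels, not literal gate lengths,
# so this ideal-scaling estimate only illustrates the direction of the gap.

def relative_density(node_a_nm: float, node_b_nm: float) -> float:
    """Ideal area scaling: density grows with the inverse square of feature size."""
    return (node_a_nm / node_b_nm) ** 2

# SMIC's 7 nm-class process vs. a 2 nm-class process: the ideal-case
# density advantage of the smaller node.
print(relative_density(7, 2))  # prints 12.25
```

Under this idealized model, a 2 nm-class node packs roughly an order of magnitude more transistors into the same area than a 7 nm-class node, which is the gap the embargo is meant to preserve.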

Chinese President Xi Jinping attends an event at the Great Hall of the People in Beijing on Friday, March 28, 2025. (AP Photo/Ng Han Guan)

At a recent meeting between China's ruler, Xi Jinping, and Huawei founder Ren Zhengfei, the latter tried to convey a reassuring message, claiming that the Chinese chip industry's dependence on the West is decreasing, and that Huawei is promoting cooperation with thousands of Chinese companies to achieve by 2028 about 75% independence in the supply chain for chip development and production. Indeed, satellite images reveal that Huawei is establishing three new advanced manufacturing facilities in China.

China's scientific community is also trying to do its part: two groundbreaking studies from Chinese universities were recently published in the journal Nature, one on the development of ultra-fast "flash" memory that could accelerate AI processing in data centers, the other on an efficient, fast transistor not based on silicon. If these discoveries mature commercially, they could move China a big step forward toward technological independence.

Last week, Nvidia founder Jensen Huang said that contrary to popular belief, China is not lagging behind the Americans in the field of AI at all. Indeed, about half of the world's AI researchers today are Chinese, and in 2024 Chinese companies and researchers registered more than 13,000 patents in the AI field, more than twice as many as their American counterparts. However, American patents are cited 7 times more often and American AI companies launched more than 40 new foundation models in 2024, compared to 15 models launched in the Chinese ecosystem.

In addition to its research investment, China also does not hesitate to use means to circumvent American sanctions, and according to Western claims, uses shell companies, smuggling, and exploitation of lax enforcement. For example, an advanced processor manufactured by TSMC was recently discovered in a Huawei chip. The processor reached Huawei through a third company that purchased it from TSMC and transferred it, in violation of export restrictions, to Huawei. Additionally, The Wall Street Journal recently reported that smuggling networks are successfully smuggling Nvidia's new "Blackwell" processor into China.

These loopholes in regulations and enforcement have only led the American administration to tighten restrictions. Just days before the end of its term, the Biden administration issued very strict regulations limiting the ability of American technology companies to sell advanced technologies to a very wide range of countries, including Israel, to prevent these technologies from leaking to China. The regulations drew much criticism from the chip industry, including from Nvidia, which claimed they would make it difficult for American technology companies to maintain proper trade relations with many countries around the world. It has now been reported that the Trump administration is expected to cancel these regulations and offer an alternative framework; time will tell whether it succeeds in addressing the phenomenon without further burdening American companies.

The defining event that made it clear to everyone that China is breathing down America's neck on AI was, of course, the appearance of Chinese AI company Deepseek. In January, on Trump's inauguration day, Deepseek, until then completely unknown, launched a reasoning model, R1, whose performance matched the reasoning models of American AI companies like OpenAI and Google – and it managed to do so using a fraction of the processing resources.

Deepseek's surprising appearance on the AI stage prompted two types of responses. Some were quick to dismiss Deepseek as a product of intellectual property theft and communist propaganda, while others rushed to declare that American sanctions had failed and had only spurred China to become an AI superpower. Neither claim holds up fully. Although Deepseek used controversial methods, its algorithms offer a significant improvement in the efficiency of training and running models, and AI companies around the world are now using the same methods to optimize their own models.

However, contrary to the prevailing assessment that Deepseek's new efficiency methods would reduce the demand for processors, they have only increased it. The reason is simple: the ability to develop and run AI models with fewer processing resources just makes AI technology more available and accessible to more players, not just technology giants with billion-dollar pockets. Indeed, Deepseek CEO Liang Wenfeng said in an interview with local media, "Money has never been a problem. The ban on technology exports is the problem."


]]>
https://www.israelhayom.com/2025/05/08/huaweis-scrambles-to-keep-china-in-the-ai-race/feed/
Undetectable AI fakes could determine US election https://www.israelhayom.com/2024/10/13/ai-fakes-impossible-to-identify-in-real-time-could-sway-us-voters/ https://www.israelhayom.com/2024/10/13/ai-fakes-impossible-to-identify-in-real-time-could-sway-us-voters/#respond Sun, 13 Oct 2024 01:30:34 +0000 https://www.israelhayom.com/?p=1003607   As the United States prepares for its first presidential election in the age of generative AI, fears are growing about the potential impact of deepfakes and AI-generated content on voter perceptions. Recent incidents involving fabricated images of candidates and foreign disinformation efforts have underscored the challenges to electoral integrity in this new technological landscape. […]

The post Undetectable AI fakes could determine US election appeared first on www.israelhayom.com.

]]>

As the United States prepares for its first presidential election in the age of generative AI, fears are growing about the potential impact of deepfakes and AI-generated content on voter perceptions. Recent incidents involving fabricated images of candidates and foreign disinformation efforts have underscored the challenges to electoral integrity in this new technological landscape.

The devastation wrought by Hurricane Helene in the southeastern United States two weeks ago left a trail of haunting images, but two pictures are likely to linger in the public consciousness more than any others. One depicted former President and Republican candidate Donald Trump in the disaster zone, standing knee-deep in floodwaters alongside rescue workers.

The other showed a small, weeping girl alone in a fragile wooden boat, clutching a tiny puppy. For many in the affected areas, the stark contrast between these images reinforced a sense that the current administration had forsaken them. Trump's picture was widely shared with the caption "hero," while the girl's image was accompanied by comments like "The administration has let us down again." There was just one snag with these powerful images: both were complete fabrications churned out by a rudimentary AI generator.

This marks the first US election unfolding in the era of generative AI (GenAI). Text and image generators like ChatGPT and Midjourney produce content on demand, setting them apart from any previous forgery technology. They can create images that challenge human perception and are accessible to anyone with an internet connection.

The list of AI-related electoral incidents is already growing. In August, Trump shared a series of images showing Taylor Swift fans wearing "Swifties for Trump" shirts, unaware they were AI-generated. This may have prompted the pop star to publicly back his rival, Harris. In an apparently unrelated development, at least one genuine image of a "Swiftie" supporting the Republican candidate surfaced after the incident. Later, Trump claimed that a photo of the crowds at one of Harris's campaign rallies had been "created using AI." An independent fact-check revealed the photo was, in fact, authentic.

Conversely, allegations of AI manipulation have become a convenient excuse for some politicians. North Carolina's lieutenant governor, Mark Robinson, attempted to dismiss an exposé of his past controversial statements by claiming it was "AI forgery." Ironically, this led to the broadcast of a campaign ad against Robinson that was itself entirely generated by AI – a first in political advertising.

Is there a technological fix for these forgeries? Israeli firm Revealense has developed AI-powered technology to detect hidden emotions in videos, which can also identify deepfakes. However, Amit Cohen, a VP at the company, tells Israel Hayom that the battle may already be lost when it comes to AI-generated still images. "Given their quality, there's no technological capability to identify a fake image in real-time based on pixel analysis," he explains. "The real challenge lies in videos and deepfakes, which can cause significant damage during sensitive periods like elections. Currently, this capability is primarily in the hands of state actors."

Indeed, US intelligence agencies have sounded the alarm that Russia, Iran, and China will leverage GenAI to undermine electoral integrity. The Cybersecurity and Infrastructure Security Agency (CISA) has also advised avoiding AI-related scams before and during Election Day.

A month ago, Microsoft unveiled evidence that Russian trolls linked to the Kremlin had disseminated two deepfake videos, garnering millions of views, aimed at undermining Harris's campaign. This came even as Russian President Vladimir Putin publicly expressed a preference for the Democratic candidate. One video featured a young woman in a wheelchair recounting a hit-and-run accident allegedly involving Harris in 2011. Fact-checkers discovered that the accident report came from a non-existent TV station, whose website was hastily created just before the fake video's distribution. The supposed victim was revealed to be an actress who was paid for the performance. "Russian actors will ramp up their efforts to spread divisive political content, staged videos, and AI propaganda," Microsoft cautioned.

UK Artificial Intelligence Safety Summit at Bletchley Park on November 2, 2023 in Bletchley, England (Photo: Leon Neal/Getty Images) Getty Images

Chinese operatives are also distributing fabricated video content, aiming to sow division and erode trust in the democratic process. Microsoft's cybersecurity team identified a Beijing-linked hacker group that disseminated anti-Biden administration and anti-Harris campaign videos before vanishing from the web. Groups associated with China are spreading content designed to damage both political camps, masquerading as Trump supporters and progressive organizations alike.

Ultimately, it's unclear whether AI-generated content will significantly sway voter decisions. Mainstream media outlets across the political spectrum have largely refrained from amplifying these fakes. On social media platforms, there are typically enough savvy users to flag suspicious images and neutralize their impact. Nevertheless, in an era of ubiquitous networks and sophisticated fakes, vigilance is paramount. "My advice is to always approach images on social networks with skepticism and verify the source," Cohen concludes.


]]>
https://www.israelhayom.com/2024/10/13/ai-fakes-impossible-to-identify-in-real-time-could-sway-us-voters/feed/
New cellphone feature could help you find products hiding on store shelves https://www.israelhayom.com/2024/02/29/new-cellphone-feature-could-help-you-find-products-hiding-on-store-shelves/ https://www.israelhayom.com/2024/02/29/new-cellphone-feature-could-help-you-find-products-hiding-on-store-shelves/#respond Thu, 29 Feb 2024 15:03:28 +0000 https://www.israelhayom.com/?p=939379   We are all too familiar with this phenomenon: We stand in the store, looking at the shelf in front of us, packed with products, and our eyes jump back and forth… "Where is the curly hair shampoo?" "Which products are gluten-free?". Well, Super-Pharm has just launched a new feature that assists customers at the […]

The post New cellphone feature could help you find products hiding on store shelves appeared first on www.israelhayom.com.

]]>

We are all too familiar with this phenomenon: we stand in the store, looking at a shelf packed with products, and our eyes jump back and forth. "Where is the curly-hair shampoo?" "Which products are gluten-free?" Well, Super-Pharm has just launched a new feature that helps in-store customers find the desired product on the shelves through augmented reality (AR).


Across the chain's branches, on the shelves and next to the products, you can scan a QR code, which will automatically activate (without registration) the AR feature. Now, you can browse around the store, scan the shelves using your smartphone camera and receive relevant information and guidance to help you find what you are looking for through AR layers integrated with the products, sort of a "personal shopper."

For example, customers can specify that they are looking for gluten-free baby food, and the relevant products will appear on the screen, marked in blue. Patrons can also search the shelves for products designed for sensitive skin or curly hair, which will appear on their screen with their unique mark. In addition to the "quick search", the platform will also allow you to get answers to frequently asked questions (FAQ) when scanning the product, such as "What ages is it suitable for?", "Is it suitable for pregnant women?" and recommendations for personal customization. 
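Functionally, the "quick search" described above boils down to filtering a tagged product catalog and highlighting the matches on screen. The snippet below is a minimal illustrative sketch of that idea only; the catalog, tags, and function names are invented for illustration and are not Super-Pharm's or weR's actual data or API:

```python
# Minimal sketch of attribute-based shelf filtering, as in an AR "quick search".
# All product names and tags here are hypothetical, invented for illustration.

CATALOG = [
    {"name": "Baby cereal A", "tags": {"baby-food", "gluten-free"}},
    {"name": "Baby cereal B", "tags": {"baby-food"}},
    {"name": "Shampoo C",     "tags": {"curly-hair"}},
    {"name": "Face cream D",  "tags": {"sensitive-skin"}},
]

def quick_search(catalog, required_tags):
    """Return products carrying every requested tag, e.g. gluten-free baby food."""
    required = set(required_tags)
    return [p["name"] for p in catalog if required <= p["tags"]]

print(quick_search(CATALOG, ["baby-food", "gluten-free"]))  # ['Baby cereal A']
```

In the real feature, the matching products would then be marked in blue in the camera view rather than returned as a list.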

In recent years, AR applications have gradually begun penetrating the retail arena, with most designed to harness technology to allow consumers to experiment with the product, like IKEA's AR App, which will enable you to scan your living room and place furniture pieces to see if they fit in size and design, or the Apps launched by Nike or Sephora, which allow you to virtually try on shoes or makeup.

Super-Pharm's feature is one of the first to bring AR to the physical in-store shopping experience. In a conversation with Israel Hayom, Michael Mitrani, Super-Pharm Online's VP of e-commerce and Marketing, explained: "We are not looking for gimmicks, but innovation that will offer value to consumers and become an inseparable part of their shopping experience at our branches. AR can be harnessed for a variety of uses. Our goal was to provide those customers who stand in front of any shelf with useful information about the products, which does not always appear on their packages, and answers to Frequently Asked Questions. We have also brought the world of filters from the online world to the physical store."

The content is produced not by Super-Pharm itself but by the brands that sell their products in the chain's branches. AR unlocks a new channel for these brands by generating personal, interactive communication with the customer at the point of sale. Mitrani continued: "AR takes the static, dry information displayed on the shelves and products and reinvents it through video, audio and interactivity. In this sense, we are providing brands with a new media platform in the retail space. These types of apps can be an additional source of revenue for the retail chain. However, we try to avoid ads, because they offer no added value to patrons and may deter them from using the feature."

Among the brands already participating in the venture are Materna, Weleda, Oral-B, Huggies, Tresemme, Simple, Durex, Nurofen, Altman, Dove and others.

The AR that connects the physical store and the virtual one

The company behind the development of Super-Pharm's new feature is Israeli startup weR, founded and managed by brothers Tomer and Amit Chachek. weR has developed an AR platform dedicated to the retail world, which enables the integration of dynamic and three-dimensional AR content as part of the shopping experience in the physical store.

Based on machine learning and machine vision technologies, the company's AR engine can identify the product on the shelf and display the visual content in 3D according to the user's movement and perspective.
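At a very high level, such an engine pairs a recognition step with a content-placement step. The toy sketch below shows only that plumbing; the recognizer is a stub, and all names and data are hypothetical, not weR's actual engine (which, per the description above, relies on trained machine-vision models):

```python
# Toy sketch of the recognize -> look up -> anchor pipeline behind shelf AR.
# The recognizer is a stub; a real engine runs a trained vision model on pixels.
# All SKUs, assets, and coordinates are hypothetical, for illustration only.

AR_REGISTRY = {
    "sku-123": {"asset": "brand_intro.mp4", "scale": 0.5},
}

def recognize(frame):
    """Stub recognizer: returns (sku, bounding box) pairs found in a camera frame."""
    return [("sku-123", (120, 80, 200, 160))]  # (x, y, width, height)

def place_overlays(frame):
    """Anchor each registered AR asset at the center of its product's detection."""
    overlays = []
    for sku, (x, y, w, h) in recognize(frame):
        content = AR_REGISTRY.get(sku)
        if content:  # only products with registered AR content get an overlay
            overlays.append({"asset": content["asset"],
                             "anchor": (x + w / 2, y + h / 2),
                             "scale": content["scale"]})
    return overlays

print(place_overlays(frame=None))
```

The hard part in practice is the stubbed-out step: recognizing products robustly and tracking the anchor as the user's perspective changes, frame after frame.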

According to weR CEO Amit Chachek, the company's platform makes it possible to connect the physical store to the digital space: "Our platform brings features from the virtual arena to the physical products in the store, like coupons and filters, personalization and digital purchases."

weR has already proven its technology in collaboration with one of the world's largest retail chains, Walmart, which in 2021 used weR's platform to weave educational content through its stores to encourage proper nutrition among children, featuring characters from Waffles + Mochi, the popular children's food series airing on Netflix. The company also collaborates with Apple, Google, Samsung and Qualcomm to adapt the platform to different devices and operating systems.

Clickable World

The user interface in these collaborations with Walmart and Super-Pharm is the consumer's personal smartphone, now equipped with a camera, depth sensors, and enough processing power to render relatively high-quality AR content. However, the holy grail for the entire AR/VR world is a pair of convenient, accessible digital glasses that would let us move through the world while consuming AR content.

In this context, it is impossible not to mention Vision Pro, Apple's mixed-reality headset launched earlier this month, which signals to the entire industry the direction this technology is headed. Vision Pro also crystallizes weR's vision. Chachek: "For the first time, Apple's Vision Pro demonstrates to people the potential of AR. As these devices become more compact, people will start using their glasses outside. We want to be the platform that interfaces between people walking down the street or in the mall and the retail world surrounding them."

The primary monetization tool for commercial companies in the digital world is our ability to "click" on a product or service we wish to purchase or learn more about. A platform like weR's makes the physical space "clickable." Chachek: "The tech giants make their fortunes from our clicks. Until now, this business model has existed only in the virtual space. AR also allows one to click on objects and products in real life – and that will be a revolution."



]]>
https://www.israelhayom.com/2024/02/29/new-cellphone-feature-could-help-you-find-products-hiding-on-store-shelves/feed/