
Network World

@networkworld.com.web.brid.gy

Network World provides news and analysis of enterprise data center technologies, including networking, storage, servers and virtualization. [bridged from https://networkworld.com/ on the web: https://fed.brid.gy/web/networkworld.com ]

6 Followers  |  0 Following  |  1,399 Posts  |  Joined: 05.02.2025

Posts by Network World (@networkworld.com.web.brid.gy)

Lack of regulatory action on hyperscaler dominance prompts inquiry chair to quit

Delays in regulatory action to deal with imbalances in the market for cloud services have prompted the resignation of the chair of an inquiry into the market. Companies deploying cloud services are being hampered by the dominance of Microsoft Azure and Amazon Web Services, a situation exacerbated by the glacial pace at which the UK’s Competition and Markets Authority (CMA) is reacting to the recommendations of its own inquiry into the cloud industry. Now the chair of that inquiry, Kip Meek, has quit the role in protest at the lack of action.

Earlier this week Meek told AI publication The Morning Intelligence, the first to report his resignation, “I shared concerns at the time that the CMA was taking a long time to pick up the recommendations of our report. I’m still concerned that the pace is going slowly.”

Although the CMA only monitors UK markets, its investigation has been closely watched by regulators and industry bodies in other countries, including the Federal Trade Commission in the US and others in Europe. Concerns about the delay in dealing with the situation were echoed by industry figures on both sides of the Atlantic.

“Glacial pace is about right. Every month that goes past without something happening, the two big guys are going to get more and more entrenched. How can it take nine months to decide what to do? It’s still nowhere near resolution,” said David Terrar, CEO of the Tech Industry Forum.

## Transatlantic concerns

The frustration was also felt by Nicky Stewart, senior adviser to the Open Cloud Coalition. “All of this was kicked off back in October 2022, when the state of the cloud industry was referred to the CMA. We’re now three-and-a-half years into the process and nothing is happening. Microsoft and AWS still have between 70% and 90% of the cloud market between them,” she said.

“The report that the CMA produced was a really comprehensive one, completely understanding the nature of the industry. We’ve been at the sharp end of uncompetitive behavior for some time,” she added.

And concerns have also been expressed in the US. “Kip Meek’s resignation highlights a stark reality: Diagnosing a potentially flawed, highly concentrated cloud market is useless if the watchdog lacks the urgency to address it. Right now, the hyperscalers are operating business-as-usual while the CMA hits the snooze button,” said Dave McCarthy, research vice president at IDC.

Regulators across the globe are currently investigating the cloud market. Last month, the US Federal Trade Commission opened an investigation into Microsoft’s position and whether it had an unfair advantage over other cloud competitors. And in November last year, the European Commission opened three market investigations on cloud computing services under the Digital Markets Act (DMA), including an investigation into whether the DMA can effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.

Stewart highlighted the EC’s action. “The commission kicked off three inquiries last autumn and they’re due to make an interim report in May or June. They may well get there before the CMA, which started three years earlier,” she said.

The situation needs to be resolved quickly given the increasing importance of AI in today’s market and the need for competitive cloud services to support it, said Terrar: “AI, particularly agentic AI, is going to change the cloud market. We’re going to see some changes, for example, more processing at the edge, and the cloud infrastructure is so fundamental to the industry today.”

And, of course, there’s the additional cost, said Stewart: “There was a footnote in the CMA report that the UK is paying about £500m too much for cloud, because of the dominance of the big players: there’s a need for more competition.”

## Regulatory foot-dragging

McCarthy also highlighted the impact on enterprises. “For enterprise users, this regulatory foot-dragging has tangible impacts. When dominant players face no immediate threat of intervention regarding egress fees or restrictive software licensing, customers are stripped of their negotiating leverage,” he said.

There are signs that attitudes among governments around the world are changing. “It’s not just Europe and the US that are acting against the dominant players; we’ve seen governments in South America and South Africa looking into it too. People are waking up and smelling the coffee.”

The CMA has said that a decision on cloud will be made by the end of this month. It’s fair to say that a lot of the industry won’t be expecting immediate changes.
05.03.2026 12:08 — 👍 0    🔁 0    💬 0    📌 0
Digital sovereignty options for on-prem deployments

France made headlines recently when it announced it would ditch Microsoft Teams and Zoom for government use in favor of the French-made Visio platform. The decision followed a similar move by Austria’s Ministry of Economy, Energy, and Tourism, which rejected Teams in favor of an open-source collaboration suite running on the Ministry’s own servers.

A large factor underlying both decisions is digital sovereignty. Especially in Europe, laws and regulations require organizations to exercise strict control over not just data but also the applications and infrastructure used to process it. Geopolitics also plays a role. In Austria, the Ministry decided Teams posed an unacceptable risk that calls and message data could be subject to access requests from the U.S. government. “If a security agency from the U.S. wants to force a U.S. vendor to pull out data, then they have to do this,” Florian Zinnagl, the Ministry’s CISO, told Computerworld.

While much of the digital sovereignty discussion focuses on cloud-based services that meet strict privacy criteria, for some the remedy involves on-premises infrastructure. It’s a strategy that puts all IT resources—network, compute, and data—firmly in the organization’s control.

“Sovereign on-premises solutions make sense when enterprises need control not just over data location, but over who governs execution, policy, and AI decision-making inside a specific jurisdiction,” says Stephanie Walter, practice leader for AI stack at HyperFRAME Research. “Hyperscalers can satisfy residency requirements, but they typically retain ownership of the control plane. Think about feeding prompts and data into a [large language model]. Who owns the data now? In regulated sectors, that’s often the real constraint.”

In terms of what makes one approach better than another, Walter favors those that “treat sovereignty as a full-stack architecture problem rather than a compliance feature layered onto existing infrastructure. Notable offerings include Fortinet and Versa Networks, and IBM at the infrastructure level.”

When it comes to choosing among the sovereign computing offerings, “enterprises should prioritize transparency of model behavior, data lineage, inference execution, and access,” Walter says. “If the enterprise cannot independently audit or govern AI operations within its jurisdiction, the solution is sovereign in name only.”

With that in mind, here’s a rundown of premises-based sovereign computing offerings from four vendors: Cisco, IBM, Fortinet, and Versa Networks.

## Cisco targets Europe with sovereign infrastructure

Cisco launched its Sovereign Critical Infrastructure portfolio in September 2025 to address European customers’ needs for more control and autonomy over their digital infrastructure and data, according to the vendor. The portfolio spans Cisco’s core product lines, including routing, switching, wireless, collaboration, and select endpoint devices, as well as Cisco and Splunk security solutions. Notably absent is the Cisco Unified Computing System (UCS), the company’s integrated data center platform that combines computing, networking, storage, and virtualization capabilities.

Products under the Sovereign Critical Infrastructure portfolio run under a special license, in air-gapped environments on customer premises, meaning they are physically isolated from outside networks, including the internet. As such, Cisco cannot connect to the systems. “Cisco will not be capable of remotely disabling products. This puts control in the hands of customers,” as Cisco puts it.

The solution addresses a number of security and privacy issues, including control over encryption keys, a primary concern for sovereign computing. It also addresses the Austrian CISO’s concern that a U.S. company can unilaterally pull out data. On the other hand, given that Cisco cannot connect to the systems, the onus is on the customer to implement software updates, including security patches. That’s a responsibility that organizations may have to accept in the name of digital sovereignty.

“Operational resilience is key for these organizations, who seek the extra controls, protections, and autonomy that genuine digital sovereignty solutions can bring,” said Rahiel Nasir, research director for European cloud and lead analyst for worldwide digital sovereignty at IDC, in a statement accompanying Cisco’s announcement. “This is especially true when it comes to network sovereignty—a challenge that few network infrastructure providers thus far have been able to address.”

Cisco says its offering aligns with “key foundational, EU and country certifications and standards.” The company also says it has a roadmap for achieving the new European Union Cybersecurity Certification (EUCC), a unified security benchmark for IT products and services. To gain the voluntary certification, companies must complete an evaluation that assures their products comply with the framework.

## IBM puts AI front and center in “principled” sovereignty approach

In January of this year, IBM announced its Sovereign Core software. IBM frames the issue in terms of artificial intelligence, which it says extends the sovereignty discussion beyond its initial focus on data residency. To address AI issues, IBM says, sovereignty must include: who operates the platform and under which authority; where AI models run and how inference is governed; who has administrative access and how it is enforced; and how compliance can be demonstrated continuously, not just documented.

IBM’s announcement differs from Cisco’s licensing-focused approach. “A fundamental architectural shift is required: one where sovereignty is an inherent property of the platform itself, not a contractual promise or deployment variant,” IBM says.

The approach IBM espouses is based on three principles it laid out, beginning with the notion that sovereignty is a platform capability, and it must be provable. “With IBM Sovereign Core, sovereignty is enforced architecturally, not contractually,” IBM says. It is built on what it calls “transparent technologies” like Red Hat OpenShift. Sovereign Core likewise operates in an air-gapped environment that functions like SaaS but is fully under the customer’s authority. “Identity, encryption keys, logs, telemetry and audit evidence remain entirely within the sovereign boundary. Ongoing compliance capabilities are embedded directly into the software, enabling organizations to produce regulator-ready proof on demand, without manual, audit-driven processes,” IBM says.

IBM’s second principle is that AI sovereignty is a first-class system property. Its approach enables organizations to deploy CPU- and GPU-based clusters, and approved open or proprietary models, all governed through controlled gateways. “AI inference and agent-based applications run locally, without exporting data or telemetry to external providers,” IBM says. Operational activity is continuously monitored and recorded, “creating a clear audit trail for AI systems operating in high-impact and regulated domains.”

The third principle is that sovereignty must be operationalized for speed and scale. A single customer-operated control plane enables customers to centrally operate “thousands of cores and hundreds of nodes with different sovereign requirements,” IBM says. Automated configuration is built in, ensuring identity, security, and compliance, while self-service provisioning for CPU, GPU, VM, and AI inference environments eases deployment.

## Fortinet tackles sovereignty on the SASE front

Secure access service edge (SASE) vendors are likewise getting into the sovereignty game. Sovereign SASE can help organizations control access, handle policy, and enforce security boundaries, notes HyperFRAME Research’s Walter. But without sovereign infrastructure underneath it, enterprises still don’t control how workloads actually execute. “True sovereignty is about owning the control plane, particularly with AI,” Walter says. “Organizations need to think about who defines policy, governs models, audits behavior, and controls operational visibility.”

Fortinet announced its sovereign SASE offering back in August 2024. Overall, SASE is a key driver of Fortinet’s 15% rise in fourth-quarter 2025 revenue, driven in no small part by sovereign SASE, Fortinet co-founder and CEO Ken Xie told analysts on a recent earnings call. “We are seeing strong demand in sovereign SASE,” Xie said on the call. Ultimately, “I believe the sovereign SASE market [is] probably even bigger than the current public SASE [market].”

FortiSASE Sovereign is a turnkey private SASE solution that enables organizations to provide a customized SASE service in a private cloud or on-premises. Users maintain full control over all SASE features, architecture, and deployment, Fortinet says. To address data sovereignty requirements, Fortinet’s solution ensures data is stored within user-specified geographic regions. It offers granular access control and identity management protocols to ensure only authorized users can access sensitive data, as well as regional encryption key management options, enabling organizations to meet their specific data protection requirements while mitigating the risk of unauthorized access. And, like IBM’s offering, FortiSASE Sovereign provides continuous monitoring and auditing mechanisms.

## Versa Sovereign SASE gives enterprises full control

In February of last year, Versa likewise announced a sovereign version of its SASE service. Versa Sovereign SASE enables enterprises to deploy the VersaONE Universal SASE Platform on their own infrastructure and networks in a customizable manner. “With Versa Sovereign SASE, enterprises get all the benefits of the VersaONE Universal SASE Platform along with complete control in design, implementation, and operations,” the company says.

For example, Versa Sovereign SASE enables organizations to maintain full control over data flows, access policies, and user activities, including in air-gapped deployments. Features including configurable geofencing and language localization help customers meet sovereignty and compliance requirements, Versa says. The solution builds on a partnership with Lumen announced in 2024 through which the Versa SASE platform is deployed on dedicated gateways in Lumen’s global points of presence or on customer premises.
05.03.2026 10:00 — 👍 0    🔁 0    💬 0    📌 0
Cato Networks brings adaptive threat defense to SASE

Cato Networks has introduced what it calls an auto-adaptive threat prevention engine, designed to stop multi-stage attacks before they cause damage or disruption. Cato Dynamic Prevention is integrated into the vendor’s secure access service edge (SASE) platform. It addresses attacks that unfold gradually and appear harmless when viewed as isolated events. Rather than relying solely on point-in-time inspection or static rules, the engine analyzes long-term behavioral patterns and correlates signals across multiple security controls to detect suspicious activity earlier in the attack chain, according to Cato Networks.

“Threat actors abuse trusted tools and valid credentials, knowing most defenses still analyze isolated events and rely on humans to connect dots for more complex attack chains,” said Lior Cohen, vice president of product management, security and management at Cato Networks, in a statement. “Cato Dynamic Prevention changes the game by continuously understanding behavior in context, predicting the threat actor’s next move, and enforcing protection automatically that would only impact true positive threats. As a result, this stops potential threats before a breach ever takes shape.”

Cato Dynamic Prevention monitors network and security activity across users, devices, and sites over extended periods. When it identifies patterns consistent with malicious behavior, it automatically applies adaptive controls to block or restrict high-risk actions, without requiring manual intervention from IT or security teams. According to the company, this approach targets threat actors who use legitimate credentials and trusted tools and spread activity across days or weeks. Individually, those actions may not trigger alerts. In environments built on disconnected point products, correlating those signals can be slow and resource-intensive, often delaying response until later stages of an attack, according to the company.

“Legacy security tools are built to spot obvious, point-in-time indicators, signatures, known bad IPs, or isolated anomalies. But modern attacks are engineered to look routine: they use legitimate admin tools, spread activity ‘low and slow,’ and break intrusion into small steps that appear harmless individually,” wrote Makiko Yamada, product marketing manager at Cato Networks, in a company blog. “The result is a flood of weak alerts and delayed action, leaving teams to manually connect the dots after the attacker has already moved.”

Because the capability operates within Cato’s cloud-native SASE architecture, it can also draw from telemetry generated by built-in services such as intrusion prevention, anti-malware, secure web gateway, and data loss prevention. The company says this unified visibility enables deeper context and more accurate correlation. Yamada explained: “The key is correlation: one internal scan might be an IT task; one remote execution command might be standard operations; one unusual authentication might be a user traveling. However, when these events occur in a suspicious sequence across multiple hosts and networks, the combined pattern becomes harder to dismiss.”

Dynamic Prevention is generally available now as part of the Cato SASE Cloud Platform, which runs on a private global backbone of more than 90 points of presence (PoPs) connected via multiple SLA-backed network providers.
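Yamada’s correlation point, that individually benign events become suspicious as a sequence, can be sketched in a few lines of Python. This is a generic illustration of sequence scoring, not Cato’s actual engine; the event types, weights, window, and threshold are invented for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical weights: each event type is weak evidence on its own.
WEIGHTS = {"internal_scan": 1, "remote_exec": 2, "unusual_auth": 2}
THRESHOLD = 4                # combined score that warrants intervention
WINDOW = timedelta(days=7)   # correlate "low and slow" activity over a week

def correlate(events):
    """events: (timestamp, host, event_type) tuples.
    Returns hosts whose recent event sequence crosses the threshold."""
    per_host = defaultdict(list)
    for ts, host, etype in sorted(events):
        per_host[host].append((ts, etype))
    flagged = {}
    for host, seq in per_host.items():
        recent = [(ts, e) for ts, e in seq if ts >= seq[-1][0] - WINDOW]
        score = sum(WEIGHTS.get(e, 0) for _, e in recent)
        # Require more than one distinct event type: one scan alone is
        # routine IT work; a scan plus remote execution plus an odd
        # authentication is a pattern that is harder to dismiss.
        if score >= THRESHOLD and len({e for _, e in recent}) > 1:
            flagged[host] = score
    return flagged

events = [
    (datetime(2026, 3, 1, 9), "hostA", "internal_scan"),
    (datetime(2026, 3, 3, 2), "hostA", "unusual_auth"),
    (datetime(2026, 3, 5, 4), "hostA", "remote_exec"),
]
print(correlate(events))  # {'hostA': 5}
```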
04.03.2026 19:12 — 👍 0    🔁 0    💬 0    📌 0
AWS Middle East outage: a reminder not to rely on cloud as disaster recovery plan

AWS customers based in the Middle East (ME) are struggling to recover services following drone attacks on the cloud company’s ME data centers on March 1. Two availability zones in the UAE and one in Bahrain were impacted. The company has been providing regular updates as it works to restore operations, but has advised customers with workloads running in the Middle East to take action now to migrate those workloads to alternate AWS Regions.

“Customers should enact their disaster recovery (DR) plans, recover from remote backups stored in other regions, and update their applications to direct traffic away from the affected regions,” the company said.

The ferocity of the attacks has exposed the inadequacy of some companies’ DR plans.

## Need a ‘blast radius audit’

“This attack exposes something most enterprises have been getting wrong for years,” said Cisco’s Nik Kale, principal engineer, CX engineering. “DR plans are written around the assumption that failures are localized and technical — a power outage, a cooling failure, maybe a fiber cut. What happened this week is a region-level event driven by geopolitics, not infrastructure failure. If your disaster recovery plan doesn’t account for the possibility that an entire geographic region becomes operationally hostile overnight, you don’t have a disaster recovery plan. You have a maintenance playbook.”

The attacks were not the sort of failure that most companies had prepared for, Kale admitted. “[But] enterprise architects need to be running what I’d call a ‘blast radius audit,’ mapping every critical workload to its physical region, identifying which services have single-region dependencies, and pressure-testing whether failover actually works when an entire region goes dark, not just when a single zone hiccups,” he said. “The enterprises that will come through events like this aren’t the ones with the thickest DR binders, they’re the ones who’ve actually failed over to another continent.”

## Activate DR plans now

AWS ME customers who haven’t already implemented comprehensive DR responses need to activate their plans immediately, advised Brad Lassiter, CEO at IT services company Last Tech. “Customers need to failover to other regions and availability zones and check DNS and routing rules. Lower time to live (TTL) wherever possible so that the network can change traffic patterns as needed,” he said, adding that enterprises also need to shift to manual operations to verify high-value transactions.

Those businesses looking for legal remedies to recover costs from the outages may be disappointed, said Frank Jennings, partner at HCR Legal, a lawyer specializing in cloud law. “Most AWS users probably didn’t check their SLA for outages caused by drone strikes! Nevertheless, most cloud SLAs will expressly exclude from their uptime commitments any downtime caused by events outside the provider’s reasonable control (a ‘force majeure’ event), including natural disasters, acts of terrorism, or war,” he said.

He said, however, that definitions of “force majeure” are often vague. “Its scope depends on the specific wording of the clause in question,” he noted.
Jennings advised AWS customers (and users of other hyperscalers’ services) to check their contracts, and not to “treat cloud service agreements as low-risk commodity purchases.” The force majeure clause, the SLA exclusions and the limitation of liability provisions all warrant close scrutiny at the point of contracting, he pointed out.

## Re-evaluate cloud plans

The ME attacks will certainly force many organizations to rethink their plans going forward, Kale observed. “Most enterprises pick cloud regions based on latency and pricing,” he said. “Almost nobody runs a geopolitical threat model against their region selection the way they’d run a capacity model. This week proved that your cloud region is a geopolitical decision whether you treat it as one or not.”

He noted that AWS’s own guidance is telling customers to do what they should have architected for from day one: have workload portability across regions, keep remote backups stored outside the blast radius, and have application-level traffic steering that doesn’t depend on the affected region being reachable.

AWS said it is making progress restoring services. In its bulletin at 8:14 a.m. PST on March 3, it said, “For Amazon S3, we are seeing continued improvement in PUT and LIST availability. Newly written objects are now able to be successfully retrieved.” It said it was still working on DynamoDB; other services would follow when this was restored, but EC2 instances remained throttled in the region.
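Lassiter’s advice to lower DNS time-to-live values so traffic can be redirected quickly can be sketched with the AWS SDK for Python. A minimal example, using a hypothetical hosted zone ID and record names; a single UPSERT both drops the TTL and repoints the record at an unaffected region.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical zone ID and record names, for illustration only.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Comment": "DR failover: lower TTL, point away from affected region",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,  # short TTL so later routing changes propagate fast
                "ResourceRecords": [
                    {"Value": "app.eu-west-1.example.com."}  # unaffected region
                ],
            },
        }],
    },
)
```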
04.03.2026 01:49 — 👍 0    🔁 0    💬 0    📌 0
Cisco: AI is a double-edged sword in industrial networks

AI can be a double-edged sword for industrial networking teams, creating both problems and benefits, according to Cisco’s newly released 2026 State of Industrial AI Report. For example, AI cybersecurity is both the biggest barrier and the top asset for industrial networking teams, according to the 1,000 industry professionals who were surveyed for the report. Among respondents, 40% cite cybersecurity concerns as a top obstacle to AI adoption, and 48% identify security as their biggest networking challenge. At the same time, 85% expect AI to improve their overall cybersecurity posture.

“While security gaps are limiting AI scale today, organizations view AI as a tool to strengthen detection, monitoring, and resilience,” Cisco stated. The vendor teamed with Sapio Research to conduct the survey, and the respondents span 19 countries and operate in 21 industrial sectors, including manufacturing, utilities and transportation.

Industrial cybersecurity ranks as the second most important area for AI investment. “This prioritization indicates that organizations are investing in AI to improve cyber resilience,” Cisco wrote. “AI will play a dual role in industrial environments: increasing the need for secure-by-design architectures while also enabling stronger, more adaptive defenses at scale.” Cybersecurity must be treated as a baseline requirement for AI-ready environments, not a downstream control, the report stated.

Another priority is collaboration between IT and OT teams. Without it, companies will slow the impact of AI. “IT/OT collaboration is essential for AI impact. AI scale is as much an organizational challenge as a technical one: collaboration enables speed, confidence, and repeatability,” the report states. As IT and OT teams work more closely together, cyber risks become more visible, not smaller—an essential step toward building resilient, AI-ready industrial environments, Cisco stated. Yet today, only 20% of organizations report fully collaborative IT/OT interworking on cybersecurity, the report states.

The 2026 report also shows how quickly AI has become the main topic of conversation among industrial networking teams. For example, the survey found 61% of respondents are actively deploying AI in industrial environments, but only 20% report mature, scaled adoption. Interestingly, in its 2024 report, Cisco found that a shortage of skilled workers was the number one challenge facing AI adoption. That challenge has fallen to third place in 2026, with AI technology integration capturing the second slot.

Some other insights from the Cisco research include:

* 51% of respondents anticipate significant increases in connectivity and reliability requirements, while 96% say wireless networking reliability is critical to enabling industrial AI—making it foundational to network readiness at scale.
* Greater edge compute capacity (44%), bandwidth (42%), and mobility (40%) are top network requirements for AI at scale.
* AI workloads introduce new performance, power, and reliability requirements that exceed traditional industrial network design assumptions. Among respondents, 97% expect AI workloads to have an impact on their industrial networks.
* Scaling AI requires shifting from human-in-the-loop workflows to machine-to-machine decisioning—driving investment in connectivity, edge, and data infrastructure.
* Effective collaboration between IT and OT teams directly impacts AI outcomes. But 43% continue to operate with limited or no IT/OT cooperation. Disparate teams slow AI deployment and increase operational risk, while IT/OT alignment accelerates scalability, stability, and security.

IBM’s X-Force security team recently wrote that AI is no longer an emerging concept in cybersecurity: “It’s a force multiplier actively used by both defenders and adversaries. Threat actors are already applying generative AI to scale phishing operations, accelerate malicious code development and enhance social engineering through improved language quality and realism. At the same time, defenders are using AI-driven analytics to process vast volumes of telemetry, identify anomalous behavior and shorten detection and response timelines.”
03.03.2026 21:10 — 👍 0    🔁 0    💬 0    📌 0
AMD accelerates telecom network AI

AMD is helping telecom operators move from AI pilots to production deployments as they transition from traditional Radio Access Network (RAN) architectures to open, virtualized ones. At the Mobile World Congress 2026 show, AMD is showing end-to-end technologies that can carry AI projects into production, from enterprise AI software to leadership CPUs, GPUs, networking technologies and adaptive computing.

“Success requires more than a model or a single layer of infrastructure: It takes an open ecosystem to develop telco-grade AI, software to operationalize it reliably, and efficient compute designed for distributed edge deployments,” the company said in a statement.

AMD is participating in Open Telco AI, a new global industry initiative led by the GSMA to accelerate the development, evaluation, and adoption of artificial intelligence systems specifically tailored for the telecommunications sector (“telco-grade AI”). Open Telco AI is a collaborative, open ecosystem for building, testing, and improving AI tools that truly understand and work with telecom data and workflows. The idea is to address the limitations of general-purpose AI models like large language models when applied to telecom-specific tasks such as network operations, standards interpretation, and troubleshooting, according to the group.

As part of the collaboration, AT&T is contributing Open Telco models, AMD is providing compute, and TensorWave is offering hosting infrastructure. AMD Instinct GPUs are used to train the Open Telco AI models, creating telco-focused models that others in the ecosystem can reuse and extend. These GPUs run AMD’s ROCm software stack, an open platform for training and inference.

Another element of AMD’s involvement is the use of AMD’s Enterprise AI Suite, which is designed as the production layer. It connects open-source AI frameworks and generative AI models with an enterprise-ready platform tuned for AMD compute, particularly GPU-based infrastructure. The suite integrates components for model serving, validated workflows, governance capabilities, and developer environments, all running on GPU clusters at scale. It’s built with a Kubernetes-native, container-based approach intended to fit into enterprise DevOps/MLOps practices while supporting security and multiteam governance.

AMD’s recently announced EPYC 8005 server CPUs are designed for the edge environments telcos face. They are optimized for telco use, with high compute density to support virtual RAN (vRAN) workloads, including compute-intensive Layer 1 processing. The processors offer support for wide thermal operating ranges, enabling OEMs to certify NEBS-compliant platforms for rugged and outdoor telco deployments, as well as small-form-factor systems.
03.03.2026 19:52 — 👍 0    🔁 0    💬 0    📌 0
Nvidia partners with optics technology vendors Lumentum and Coherent to enhance AI infrastructure

Nvidia on Monday announced strategic partnerships with Lumentum Holdings and Coherent, which it said are designed to accelerate the development of advanced optics technologies used in AI data center infrastructure. The agreements will see Nvidia invest $2 billion in each company to support their research and development and operations, and to build out or expand their US-based manufacturing capabilities.

In its announcements, Nvidia noted that optical interconnects and advanced packaging integration are “foundational to the next phase of AI infrastructure, as they unlock ultrahigh-bandwidth, energy-efficient connectivity across AI factories.” Each nonexclusive deal includes what Nvidia described as a “multi-billion purchase commitment and future access rights for advanced laser components,” and a $2 billion investment in each organization to support R&D, future capacity, and operations as the companies build out their US-based manufacturing capabilities.

Brian Jackson, principal research director at Info-Tech Research Group, said that with the two investments, “Nvidia is laying the groundwork for its future as a competitive provider of AI training infrastructure. While Nvidia has dominated this space over the last few years with its latest GPUs serving as the backbone of frontier AI model training, in the past 12 months, we’ve seen more deals signed by major AI developers with purpose-built silicon providers like Amazon and Google.”

He pointed out, “[this] indicates that alternatives to GPUs aren’t just more power-efficient ways to train AI, but also offer enough performance to satisfy best-in-class developers. Nvidia wants to make a leap ahead of the competition with its own next-gen chip manufacturing leap.”

Jackson added, “it also looks like the bet will be on photon transfer optics. Photonics-based computers have been in development as prototypes for more than a decade, and seek to address the physical limitations of copper as an electrical conduit.” By relying on the transfer of light through glass, he said, “this architectural approach is more energy efficient and promises to be much faster than current chips. If Nvidia can mass-manufacture a next-generation GPU that integrates photonics right into its silicon, then they can solve a couple of big problems for AI developers: power consumption and speed.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, said that the dual $2 billion investment “sends a signal about AI infrastructure bottlenecks: this is the moment where the industry quietly admits that AI scaling is no longer primarily a chip story. It is a communication story.”

For the last few years, he said, “the visible constraint was straightforward. Enterprises could not get enough GPUs. Hyperscalers reserved allocation. Vendors rationed supply. That was the first choke point. But once accelerators are deployed at scale, the bottleneck moves. It does not disappear.”

Gogia added that in today’s AI clusters, “each accelerator depends on dozens of high-speed links to talk to its neighbours. Multiply that across the rack and you end up with thousands of interconnects operating continuously. Every one of those links draws power. Every one introduces latency and signal integrity considerations. Every one carries a probability of failure.”

What Nvidia is signalling is that the next bottleneck is the fabric itself, he pointed out. “You can add more GPUs, but if the network layer cannot scale proportionally, utilisation falls and economics deteriorate,” he said. “The company is moving upstream to ensure the arteries of AI infrastructure do not become the new point of scarcity. This is not a marketing flourish. It is a structural admission that the networking wall is real.”

Gogia noted that the emphasis on domestic manufacturing is not cosmetic language; it is strategic insulation. “Semiconductor supply chains are now entangled with national policy,” he observed. “Export controls, rare earth dependencies, and industrial subsidies have reshaped how advanced components move globally. Photonics is increasingly part of that strategic infrastructure.”

By supporting US-based fabrication expansion, Nvidia “reduces geopolitical exposure and aligns with domestic industrial priorities. This positioning may influence allocation decisions during supply stress,” he said. And for enterprises operating outside the United States, “this introduces a secondary consideration,” he said. “During capacity constraints, strategically aligned markets may receive preferential treatment. Procurement strategy must therefore factor in geography and policy alignment alongside price and performance.”

Regardless of location, CIOs and senior network executives planning AI factory deployments should now stop treating the optical fabric as a networking detail. “Budget assumptions should incorporate interconnect density growth, projected energy per bit efficiency, redundancy models, and vendor concentration risk,” he said. “Optical roadmap transparency should be a formal part of vendor due diligence.”

“[Contracts] should address supply allocation rights and upgrade pathways,” he noted. “AI ROI models should include GPU utilization impacts tied to network performance. Sustainability reporting should account for interconnect power draw, not just server efficiency.”

In addition, he said, “failure domain mapping should reflect optical integration blast radius, not just server node failure. AI infrastructure governance must evolve from server-centric thinking to system-centric planning. The fabric layer now belongs on the board agenda.”
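To put rough numbers on Gogia’s “thousands of interconnects” point, here is a back-of-the-envelope sketch in Python. The per-GPU link count, rack size, cluster size, and per-link power draw are illustrative assumptions, not figures from Nvidia or the article.

```python
# Illustrative assumptions (not from the article):
gpus_per_rack = 72     # a dense rack-scale AI system
links_per_gpu = 18     # "dozens of high-speed links" per accelerator
watts_per_link = 15    # optical transceiver power, order of magnitude
racks = 100            # a modest AI cluster

links = gpus_per_rack * links_per_gpu * racks
fabric_mw = links * watts_per_link / 1e6
print(f"{links:,} interconnects drawing ~{fabric_mw:.1f} MW")
# 129,600 interconnects drawing ~1.9 MW: under these assumptions the
# fabric itself becomes a first-order line item in power and
# failure-domain planning, which is Gogia's point.
```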
03.03.2026 01:28 — 👍 0    🔁 0    💬 0    📌 0
Intel aims advanced Xeon 6+ at AI edge computing

At the Mobile World Congress show in Barcelona, Intel showcased its most advanced processor yet, the Xeon 6+ processor, codenamed “Clearwater Forest.” Technically, it is one of Intel’s most complex chiplet designs, with a package that combines a total of 12 compute chiplets manufactured on a mix of the Intel 18A, Intel 7, and Intel 3 manufacturing processes.

Clearwater Forest supports the existing Xeon server platform socket, 12 memory channels, 96 PCIe 5.0 lanes, and 64 CXL 2.0 lanes. It supports memory up to DDR5-8000 (a rough peak-bandwidth estimate appears at the end of this post). The chip contains 288 E-cores (Efficiency cores), with a high-bandwidth on-chip fabric to link two chips in a two-socket design.

One of the primary target markets is cloud providers, where dozens if not hundreds of virtual machines can be spun up on a single processor. Intel is also targeting network environments across the Radio Access Network (RAN), 5G core, and edge, while maintaining efficiency, openness, and cost control. In a blog post announcing the Xeon 6+, Kevork Kechichian, executive vice president and general manager of the Data Center Group, says this approach enables real-time inference inside virtualized RAN deployments, so data can be processed where it resides rather than moving it around.

Intel has an existing partnership with telecom giant Ericsson, and the two firms announced they have extended it to cover the joint development and marketing of what they call AI-native 6G solutions. Details were scarce; the collaboration is described as advancing “future high-performance, and energy-efficient compute architectures designed for both AI for networks and Networks for AI.” AI-native 6G will combine intelligent and programmable networks with advanced compute and real-time sensing. Over time, that evolution could bring sensing and compute closer together across the network.

So the two firms are not targeting one particular segment of the market, but the whole thing: connectivity, cloud, security, and compute capabilities for the RAN and packet core. The intended outcome is an architecture “that combines intelligent, programmable networks with advanced compute and real-time sensing, which will underpin more responsive, efficient and capable services, and ultimately result in closer integration between sensing and compute.”

According to Intel, the Xeon 6+ series is planned for launch in the first half of 2026.

#### More Intel news:

* Intel teams with SoftBank to develop new memory type
* Intel sets sights on data center GPUs amid AI-driven infrastructure shifts
* Intel wrestling with CPU supply shortage
* Intel’s AI pivot could make lower-end PCs scarce in 2026
* Intel nabs Qualcomm veteran to lead GPU initiative
* Intel decides to keep networking business after all
* Intel sees supply shortage, will prioritize data center technology
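As a rough check on what the stated memory configuration implies, here is the theoretical peak bandwidth of 12 channels of DDR5-8000, assuming the standard 64-bit channel width; real-world throughput will be lower.

```python
channels = 12
transfers_per_sec = 8000e6   # DDR5-8000: 8,000 megatransfers per second
bytes_per_transfer = 8       # standard 64-bit DDR5 channel width

peak = channels * transfers_per_sec * bytes_per_transfer
print(f"{peak / 1e9:.0f} GB/s theoretical peak per socket")  # 768 GB/s
```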
02.03.2026 20:33 — 👍 0    🔁 0    💬 0    📌 0
Nvidia partners with telecom providers for open 6G networks

Nvidia has partnered with a variety of global telecom providers in a commitment to build 6G on open and secure artificial intelligence-native platforms, bringing software-defined networking to telecommunications. Announced at the Mobile World Congress conference, the list of Nvidia partners is a who’s who of telecom — Booz Allen, BT Group, Cisco, Deutsche Telekom, Ericsson, MITRE, Nokia, OCUDU Ecosystem Foundation, ODC, SK Telecom, SoftBank Corp. and T-Mobile.

Initial trials for 6G are expected to start as early as 2028, and the new network is expected to launch commercially around 2030.

“Unlike 5G, 6G is being born in the AI era, and the networks of today simply aren’t ready for the use cases of tomorrow,” said Ronnie Vasishta, senior vice president of telecommunications at Nvidia, on a conference call with the tech media. “Remember, AI did not exist when 5G was being defined. So using AI to even improve the networks wasn’t possible in that definitional phase.”

The company said the initiative represents a shared commitment to ensure 6G infrastructure is open, intelligent, and resilient, and that it accelerates innovation and safeguards global trust. 6G wireless networks are being designed to accelerate advancements in physical AI, enabling billions of autonomous machines, sensors, vehicles, and robots to interact with the real world at scale. By embedding AI across the radio access network (RAN), edge and core, 6G networks must enable secure integrated sensing and communications, intelligence and decision-making while supporting interoperability, supply-chain resilience and faster innovation.

Nvidia also announced new AI-RAN collaborations with partners T-Mobile US, SoftBank and Indosat Ooredoo Hutchison, all of which have taken test systems live. “Software-defined AI-RAN is no longer just a concept. It’s moving to live networks. T-Mobile, Nokia, and Nvidia have completed the first live AI-RAN call using Nokia’s CUDA-accelerated software running on Nvidia at their outdoor trials on live networks,” said Vasishta.

This year’s MWC will see three times the number of AI-RAN innovations compared to last year, with 26 out of 33 AI-RAN Alliance demos built using Nvidia AI Aerial and a software-defined architecture.

#### More Nvidia news:

* Nvidia plans a Windows PC SoC, setting up direct competition with Qualcomm, Intel, and AMD
* Nvidia lines up partners to boost security for industrial operations
* Meta scoops up more of Nvidia’s AI chip output
* Reports of Nvidia/OpenAI deal in jeopardy are overblown, says Nvidia’s CEO
* Eying AI factories, Nvidia buys bigger stake in CoreWeave
* China clears Nvidia H200 sales to tech giants, reshaping AI data center plans
* Nvidia is still working with suppliers on RAM chips for Rubin
* RISC-V chip designer SiFive integrates Nvidia NVLink Fusion to power AI data centers
* Nvidia H200 chips in China: US says yes, China says no
* Lenovo-Nvidia partnership targets faster AI infrastructure rollouts
02.03.2026 17:16 — 👍 0    🔁 0    💬 0    📌 0
Why network bandwidth matters a lot

What do enterprises wish for the most when it comes to networking? OK, if you guessed “that it could be free” you’d be right, but they don’t really think that’s realistic. Their biggest feasible wish is **more capacity**. Networks push bits, and of 372 enterprises who offered comments on their 2026 wishes, 328 put more capacity at the top. It’s not all about AI either. This group thinks that, while there’s no universal cure for network ills, having more capacity comes close. So why is that?

Enterprises have three networks: the data center network, the WAN or VPN, and the LANs that connect workers. To most enterprises, the data center network is the primary focus, likely because that’s where most of their capex is targeted. Two-thirds of enterprises say that their entire networking strategy is based on their data center network, which in turn is based on their hosting platform and application requirements. All the enterprises who commented agreed that the data center network is absolutely critical—a problem there means their overall network plan is in jeopardy.

In the data center, of the 328 who said more capacity was important for their networks, almost half said that they had added capacity in the last two years, and 30% said they planned more increases in 2026. You might expect that AI is the big driver, but it was cited as the top reason by only 11% of the enterprises. What is the top reason to add capacity? To eliminate complaints about quality of experience (QoE).

Almost 80% of enterprises overall tell me that the most difficult and expensive netops mission is **responding to user complaints about QoE**. That’s not surprising if you think about it, because unlike a real network fault, a QoE complaint doesn’t have an immediate technical symptom to point personnel to a target problem. You have to dig, and because QoE issues are often transient, and are rarely reported immediately, they may be difficult or impossible to diagnose. But there’s a deeper link to capacity, say enterprises. Congestion and latency, they say, end up being at the heart of over half of reported QoE problems, which means that additional network capacity would likely have prevented them. Not only that, even network faults that result in application performance or availability complaints might be solved with more robust alternate routing options, ones that didn’t result in overloading alternate paths.

But why focus on the data center when user LAN connections and VPNs are also potential contributors to the problem of QoE? Is it just because, in today’s networks, most capex goes to the data center rather than to LANs and VPNs? Not according to enterprises. Of the 328 users who valued network capacity highly, only a quarter said that they had any issues with VPN or worker-LAN performance. Where such issues exist, they’ve tended to be localized to a small number of workers, a small number of locations. A data center network problem hits everyone, everywhere.

One interesting point about VPNs is raised by fully a third of capacity-hungry enterprises: SD-WAN is the cheapest and easiest way to increase capacity to remote sites. Yes, service reliability of broadband Internet access for these sites is highly variable, so enterprises say they need to pilot test in a target area to determine whether even business-broadband Internet is reliable enough, but if it is, high capacity is both available and cheap.

Clearly data center networking is taking the prime position in enterprise network planning, even without any contribution from AI. Will AI contribute? Enterprises generally believe that self-hosted AI will indeed require more network bandwidth, but again think this will be largely confined to the data center. AI, they say, has a broader and less predictable appetite for data, and business applications involving data that’s subject to governance, or that’s already data-center hosted, are likely to be hosted proximate to the data. That was true for traditional software, and it’s likely just as true for AI. Yes, but…today, three times as many enterprises say they’d cite AI needs simply to bolster the justification for capacity expansion as say they actually need that capacity for AI now. AI hype has entered, and perhaps even dominates, capital network project justifications.

These capacity trends don’t impact enterprises alone; they also reshape the equipment space. Only 9% of enterprises say they have invested in white-box devices to build capacity and data center configuration flexibility, but the number that say they would evaluate them in 2026 is double that. This may be what’s behind Cisco’s decision to push its new G300 chip. AI’s role in capital project justifications may also be why Cisco positions the G300 so aggressively as an AI facilitator. Make no mistake, though; this is really all about capacity and QoE, even for AI.
02.03.2026 15:14 — 👍 0    🔁 0    💬 0    📌 0
OpenAI launches stateful AI on AWS, signaling a control plane power shift

Stateless AI, in which a model offers one-off answers without context from previous sessions, can be helpful in the short term but lacking for more complex, multi-step scenarios. To overcome these limitations, OpenAI is introducing what it is calling, naturally, “stateful AI.” The company has announced that it will soon offer a stateful runtime environment in partnership with Amazon, built to simplify the process of getting AI agents into production. It will run natively on Amazon Bedrock, be tailored for agentic workflows, and optimized for AWS infrastructure.

Interestingly, OpenAI also felt the need to make another announcement today, underscoring the fact that nothing about other collaborations “in any way” changes the terms of its partnership with Microsoft. Azure will remain the exclusive cloud provider of stateless OpenAI APIs.

“It’s a clever structural move,” said Wyatt Mayham of Northwest AI Consulting. “Everyone can claim a win, but the subtext is clear: OpenAI is becoming a multi-cloud company, and the era of exclusive AI partnerships is ending.”

## What differentiates ‘stateful’

The stateful runtime environment on Amazon Bedrock was built to execute complex steps that factor in context, OpenAI said. Models can forward memory and history, tool and workflow state, environment use, and identity and permission boundaries. This represents a new paradigm, according to analysts.

Notably, stateless API calls are a “blank slate,” Mayham explained. “The model doesn’t remember what it just did, what tools it called, or where it is in a multi-step workflow.” While that’s fine for a chatbot answering one-off questions, it’s “completely inadequate” for real operational work, such as processing a customer claim that moves across five different systems, requires approvals, and takes hours or days to complete, he said.

New stateful capabilities give AI agents a persistent working memory so they can carry context across steps, maintain permissions, and interact with real enterprise tools without developers having to “duct-tape stateless API calls together,” said Mayham. Further, the Bedrock foundation matters because it’s where many enterprise workloads already live, he noted. OpenAI and Amazon are meeting companies where they are, not asking them to rearchitect their security, governance, and compliance posture. This makes sophisticated AI automation accessible to mid-market companies; they will no longer need a team of engineers to “build the plumbing from scratch,” he said.

Sanchit Vir Gogia, chief analyst at Greyhound Research, called stateful runtime environments “a control plane shift.” Stateless can be “elegant” for single interactions such as summarization, code assistance, drafting, or isolated tool invocation. But stateful environments give enterprises a “managed orchestration substrate,” he noted. This supports real enterprise workflows involving chained tool calls, long-running processes, human approvals, system identity propagation, retries, exception handling, and audit trails, said Gogia, while Bedrock enforces existing identity and access management (IAM) policies, virtual private cloud (VPC) boundaries, security tooling, logging standards, and compliance frameworks.

“Most pilot failures happen because context resets across calls, permissions are misaligned, tokens expire mid workflow, or an agent cannot resume safely after interruption,” he said. These issues can be avoided in stateful environments.
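To make the distinction concrete, here is a minimal sketch of what a stateful session carries across steps: memory, workflow position, and a permission boundary. This is a generic illustration in plain Python, not OpenAI’s or Bedrock’s actual API; all names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    """Toy stateful runtime: everything here is lost between
    stateless API calls but preserved across stateful steps."""
    memory: list = field(default_factory=list)     # prior steps and results
    step: int = 0                                  # position in the workflow
    permissions: set = field(default_factory=set)  # identity boundary

    def run_step(self, action: str, tool: str) -> str:
        if tool not in self.permissions:
            raise PermissionError(f"agent may not call {tool}")
        result = f"{tool}({action})"  # stand-in for a real tool invocation
        self.memory.append(result)    # context survives to the next step
        self.step += 1
        return result

# A claim that moves across systems over multiple steps:
session = AgentSession(permissions={"crm", "billing"})
session.run_step("fetch claim 42", "crm")
session.run_step("compute refund", "billing")
print(session.step, session.memory)
# 2 ['crm(fetch claim 42)', 'billing(compute refund)']
# A stateless call would start each step from a blank slate, leaving the
# developer to duct-tape memory, state, and permissions back together.
```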
## Factors IT decision-makers should consider

However, there are second-order considerations for enterprises, Gogia emphasized. Notably, state persistence increases the attack surface area. This means persistent memory must be encrypted, governed, and auditable, and tool invocation boundaries should be “tightly controlled.” Further, workflow replay mechanisms must be deterministic, and observability granular enough to satisfy regulators.

There is also a “subtle lock-in dimension,” said Gogia. Portability can decrease when orchestration moves inside a hyperscaler-native runtime. CIOs need to consider whether their future agent architecture remains cloud portable or becomes anchored in AWS’ environment.

Ultimately, this new offering represents a market pivot, he said: The intelligence layer is being commoditized. “We are moving from a model race to a control plane race,” said Gogia. The strategic question now isn’t about which model is smartest. It is: “Which runtime stack guarantees continuity, auditability, and operational resilience at scale?”

## Partnership with Microsoft still ‘strong and central’

Today’s joint announcement from Microsoft and OpenAI about their partnership echoes OpenAI’s similar reaffirmation of the collaboration in October 2025. The partnership remains “strong and central,” and the two companies went so far as to call it “one of the most consequential collaborations in technology,” focused on research, engineering, and product development. The companies emphasized that:

* Microsoft maintains an exclusive license and access to intellectual property (IP) across OpenAI models and products.
* OpenAI’s Frontier and other first-party products will continue to be hosted on Azure.
* The contractual definition of artificial general intelligence (AGI) and the “process for determining if it has been achieved” is unchanged.
* An ongoing revenue share arrangement will stay the same; this agreement has always included revenue-sharing from partnerships between OpenAI and other cloud providers.
* OpenAI has the flexibility to commit to compute elsewhere, including through infrastructure initiatives like the Stargate project.
* Both companies can independently pursue new opportunities.

“That joint statement reads like it was drafted by three law firms simultaneously, and that’s the point,” says Mayham. The anchor of the agreement is that Azure remains the exclusive cloud provider of stateless OpenAI APIs. This allows OpenAI to establish a new category on AWS that falls outside of Microsoft’s reach, he said.

OpenAI is ultimately “walking a tightrope,” because it should expand distribution beyond Azure to reach AWS customers, who comprise a massive portion of the enterprise market, he noted. At the same time, it has to ensure Microsoft doesn’t feel like its $135 billion investment “just got diluted in strategic value.”

Gogia called the statement “structural reassurance.” OpenAI must grow distribution across clouds because enterprise buyers are demanding multi-cloud flexibility. They don’t want to be confined to a single cloud; they want architectural optionality. Also, he noted, “CIOs and boards do not want vendor instability. Hyperscaler conflict risk is now a board-level concern.”

## New infusion of funding (again)

Meanwhile, $110 billion in new funding from Nvidia, SoftBank, and Amazon will allow OpenAI to expand its global reach and “deepen” its infrastructure, the company says. Importantly, the funding includes the use of 3 GW of dedicated inference capacity and 2 GW of training capacity on Nvidia’s Vera Rubin systems. This builds on the Hopper and Blackwell systems already in operation across Microsoft, Oracle Cloud Infrastructure (OCI), and CoreWeave.

Mayham called this “the headline within the headline.” “Cash doesn’t build AI products; compute does,” he said. Right now, access to next-generation Nvidia hardware is the “true bottleneck for every AI company on the planet.” OpenAI is essentially locking in a “guaranteed supply line” for the chips that power everything it does. The money from all three companies funds operations and infrastructure, but the Nvidia capacity and training allows OpenAI to use infrastructure at the frontier, said Mayham. “If you can’t get the processors, the cash is just sitting in a bank account.”

Inference is now one of the biggest cost drivers in AI, and Gogia noted that frontier AI systems are constrained by physical infrastructure: GPUs, high-bandwidth memory (HBM), high-speed interconnects, and other hardware, as well as grid-level power capacity, are all finite resources. The current moves embed OpenAI deeper into the infrastructure stack, but the risk is concentration. When compute control centralizes among a small cluster of hyperscalers and chip vendors, the system can become fragile. To protect themselves, Gogia advised enterprises to monitor supply chain concentration. “In strategic terms, however, this move strengthens OpenAI’s durability,” he said. “It secures the physical substrate required to sustain frontier model scaling and enterprise inference growth.”

_This article originally appeared on InfoWorld._
28.02.2026 01:48 — 👍 0    🔁 0    💬 0    📌 0
Security hole could let hackers take over Juniper Networks PTX core routers Network admins with Juniper PTX series routers in their environments are being warned to patch immediately, because a newly-discovered critical vulnerability could lead to an unauthenticated threat actor running code with root privileges. _T_ he hole is “especially dangerous, because these devices often sit in the middle of the network, not on the fringes,” said Piyush Sharma, CEO of Tuskira _. “_ If an attacker gains control of a PTX, the impact is bigger than a single device compromise because it can become a traffic vantage point and a control point at the same time. This opens the door to the stealthy interception of data flows, controller redirected traffic, or easy pivots into adjacent networks.” This issue affects PTX routers running versions of the Junos OS Evolved operating system earlier than 25.4R1-S1-EVO and 25.4R2-EVO. It doesn’t affect the standard Junos OS. In a notice, Juniper said it isn’t aware of any malicious exploitation of this vulnerability. The hole was found during internal product security testing or research. The PTX line is a series of modular high performance core routers powered by HPE Juniper Networks’ latest generation of custom Express family ASICs and optimized for 400G and 800G migrations. They offer native 400G and 800G inline MACsec, deep buffering and flexible filtering. The company says they are built for longevity in demanding WAN (wide area network) and data center use cases and deployment scenarios, including core, peering, data center interconnect, data center edge, metro aggregation, and AI data center networking. In its notice, Juniper says an Incorrect Permission Assignment for Critical Resource vulnerability in the On-Box Anomaly detection framework of the operating system allows an unauthenticated, network-based attacker to execute code as root. The detection framework is enabled by default. “The On-Box Anomaly detection framework should only be reachable by other internal processes over the internal routing instance, but not over an externally exposed port,” the alert adds. “With the ability to access and manipulate the service to execute code as root, a remote attacker can take complete control of the device.” To resolve the issue, admins should make sure version 25.4R1-S1-EVO of Junos OS Evolved is installed. They should also note that versions 25.4R2-EVO and 26.2R1-EVO are on the way. If the update can’t be installed immediately, admins should use access control lists or firewall filters to limit access to only trusted networks and hosts, to reduce the risk of exploitation of this issue. Ensure such filters only permit explicitly required connections and block all others. Another option is to disable the service by entering _request pfe anomalies disable_ in the operating system’s command line. Sharma said Juniper vulnerabilities have attracted a lot of attention from hackers over the years because of the premium positioning the routers give if long-term footholds are established. “As a network operating system, Junos sits at the crossroads of major control points like identity, policy, and traffic, which means a single exploit can scale quickly across valuable networks,” he said. 
“Additionally, these footholds provide attackers a longer window to find and exploit vulnerable devices, since core network gear is painful to patch due to long downtimes.” To prevent vulnerabilities such as this one from leading to exploitation, organizations need a defense platform that can continuously monitor for anomalies across networks and alert security teams when malicious behavior is detected, he added. Disclosure of the vulnerability comes as Juniper’s parent firm HPE prepares to introduce new PTX12000 and PTX10002 router families at next week’s Mobile World Congress. HPE bought Juniper last year. _This article originally appeared on CSO Online._
27.02.2026 21:41 — 👍 0    🔁 0    💬 0    📌 0
Preview
Enterprise Spotlight: Data Center Modernization
27.02.2026 10:00 — 👍 0    🔁 0    💬 0    📌 0
Preview
Why do data centers need so much water? Data centers are increasingly causing problems and wearing out their welcome in many localities, for a variety of reasons. The two most commonly cited issues are power consumption driving up everyone’s electric bill and noise from generators disrupting surrounding neighborhoods. But there is another reason to add to the list: water consumption. According to the International Energy Agency (IEA), a typical 100-megawatt hyperscale data center consumes around 530,000 gallons of water per day, equivalent to the use of 6,500 homes. And it has to be fresh water, not ocean water. Given the drought conditions found in many western states, people and municipalities alike are not keen on a mega data center sucking up all of the drinkable water. Data center cooling has essentially followed the model of human sweat. We perspire when overheated, and that water cools us as it evaporates off our skin. A common form of cooling is latent heat evaporation, where water works much like sweat: as it evaporates, it cools. And this is a common method of cooling, notes Matt Green, president of Brucker, an HVAC solutions provider. “The reason why we evaporate water is because it’s a very effective way to cool and in normal comfort, cooling office buildings, things like that, the amount of water consumption we deal with is tolerable,” he said. But when you start building data centers, which are basically power plants in reverse, the amount of heat generated means using substantially more water than a conventional building of the same footprint would, Green added. Another legacy cooling technology in data centers is the cooling tower. A cooling tower sits outside the main building, and water cascades down it like a waterfall. The tower is open to the atmosphere to let natural cooling in. The churn of the water dissipates the heat, but there is significant evaporation in the process. “It evaporates a lot. I mean, we’re talking many, many Olympic swimming pools worth of water on a daily basis in some of these data centers,” said Green. “Some of the hyperscalers I work with are still using open cooling tower solutions, even today.” There were other reasons for using evaporation. For starters, evaporation equipment takes up a lot less space than chilled-water equipment. Second is price: chilled-water cooling costs about 10% to 15% more than equivalent evaporation technology. But that is changing, Green notes: as societal and economic pressure around water consumption moves to the forefront, data centers are being forced to adapt. “We’re in a market now where we can use air cooled chillers that don’t evaporate water like a water-cooled chiller does, and have a very, very similar level of overall system efficiency,” he said. We are also seeing the advent of closed-loop technology, where liquid is pumped into a system to absorb heat and then pumped out to be cooled and recirculated, much like a car radiator. Gamers have been at the forefront of liquid cooling, with closed-loop all-in-one coolers for gaming PCs becoming standard issue. And change is coming to the data center as well. Green says he’s seeing technologies like ultra-high-efficiency air-cooled chillers replace the ultra-efficient water-cooled plants that were being built. 
The industry is also seeing another technology called hybrid heat rejection, which uses water only on the hottest, most humid days and can otherwise run like a radiator when it’s cooler outside. Most enterprises get along just fine with a computer room air conditioner (CRAC) or a computer room air handler (CRAH). Those have varying water consumption associated with them, too. “Sometimes we see water-cooled but more often, we just see straight air-cooled computer room air conditioners,” said Green. Enterprise data centers can stick with air cooling for now because “the densities aren’t high enough for them. They can still cool it with air for what we’re seeing today.” But many of these data centers are anticipating higher server loads, and they’re building out infrastructure to be prepared for the future. “I see that more on the colo side, less on the enterprise side; their densities are still far too low to need to go to direct-to-chip cooling,” said Green.
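Green’s “many Olympic swimming pools” figure is easy to sanity-check from first principles. Here is a rough back-of-the-envelope sketch in Python, assuming every watt of a 100 MW facility’s heat is rejected through evaporation; real facilities also shed sensible heat, so actual draw (like the IEA’s 530,000-gallon figure) lands lower:

```python
# Rough upper bound on evaporative water use for a 100 MW data center.
# Assumption: all heat is rejected as latent heat of vaporization.

HEAT_LOAD_W = 100e6              # 100 MW facility heat load
LATENT_HEAT_J_PER_KG = 2.26e6    # latent heat of vaporization of water (~2.26 MJ/kg)
SECONDS_PER_DAY = 86_400
LITERS_PER_GALLON = 3.785

water_kg_per_s = HEAT_LOAD_W / LATENT_HEAT_J_PER_KG   # ~44 kg/s
liters_per_day = water_kg_per_s * SECONDS_PER_DAY      # 1 kg of water ~ 1 liter
gallons_per_day = liters_per_day / LITERS_PER_GALLON

print(f"{gallons_per_day:,.0f} gallons/day")  # ~1,000,000 gallons/day upper bound
```

That the IEA’s reported figure is roughly half this bound is consistent with cooling towers also rejecting a meaningful share of heat as sensible heat rather than through evaporation alone.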
26.02.2026 18:32 — 👍 0    🔁 0    💬 0    📌 0
Preview
ControlMonkey extends configuration disaster recovery to cloud network vendors Network resiliency is about more than just DNS redundancy and using multiple regions and providers. It also requires extending resiliency to network configuration. That’s the challenge that cloud infrastructure automation startup ControlMonkey is now taking on. ControlMonkey launched its Cloud Configuration Disaster Recovery capability in 2025, targeting AWS, Azure and GCP infrastructure. Today the company is expanding its configuration-level disaster recovery platform to the network control plane—specifically to the CDN configurations, firewall rules, DNS records, route tables and edge routing policies that sit outside the major cloud providers but are critical to production uptime. That support brings configuration backup capabilities for Cloudflare, Akamai, Fastly, and F5. The goal is to close what is likely a gap in the resilience and disaster recovery posture of many organizations. “Everybody backs up their data, right? You have to be crazy not to back up your data,” Aharon Twizer, CEO and co-founder of ControlMonkey, told _Network World_. “What about your networking configuration? If your networking is down, it’s amazing that you have data, but you’re not going to get any traffic.” ## How ControlMonkey addresses the network configuration gap The expansion came from customer requests for coverage beyond AWS, Azure, and Google Cloud Platform. Twizer, citing his own customer conversations, said the gap for third-party network vendors is larger than it is for cloud infrastructure. “If you look at the third party, like 90% of the people I talk to, they don’t manage their Cloudflare with Terraform, they don’t manage their Akamai with Terraform, they don’t manage their F5 with Terraform,” Twizer noted. “So, they have zero coverage.” ControlMonkey uses the Terraform Infrastructure-as-Code (IaC) technology to define the environment. The platform connects to each supported vendor and reverse engineers live configurations into Terraform HCL code. It then creates versioned snapshots on a daily basis. The workflow has three phases. First, the platform performs a full asset inventory after connecting a vendor. Second, it identifies which resources have no code coverage and flags them for the operator. Third, it enables daily configuration snapshots so teams have a known-good state to recover from. “The way to back up your configuration is with infrastructure as code,” Twizer explained. “We specifically do that with Terraform, and our core technology, our secret sauce, is to take providers or vendors of infrastructure and reverse engineer existing configuration, live configuration, to code.” Recovery is executed through a one-click restore. When an incident occurs, the platform uses Terraform automation to provision the last known-good configuration into a second tenant. Customers can also use ControlMonkey APIs to build automated recovery playbooks triggered from external alerting tools such as PagerDuty or Datadog. ## Scope: Configuration recovery, not vendor availability To be clear, ControlMonkey isn’t a solution that will solve the issue of provider outages. The platform addresses configuration recovery, not vendor availability monitoring. The primary scenario ControlMonkey is designed for is a ransomware attack that deletes or corrupts network configurations rather than data. In that situation, workloads and data may be intact, but the network control plane is gone and applications become unreachable. 
If there’s an outage for the vendor in general, “there’s nothing we can do about it, really,” Twizer said. “We’re looking more at ransomware, we’re looking more at cyberattacks, we’re looking more at AI agents that make mistakes and honest mistakes by employees.” The platform also does not provide multi-vendor failover recommendations. It shows recovery posture for existing vendor configurations, not routing guidance to alternative providers. ## Roadmap points beyond networking Network vendors are not the end of the expansion. Twizer said customer requests are driving coverage into additional vendor categories beyond cloud and networking, with the platform eventually targeting any third-party service that enterprises rely on for production operations. Compliance is also a factor. SOC 2 and ISO 27001 both address disaster recovery and business continuity planning, and ControlMonkey positions configuration recovery as part of that cycle alongside data protection. Twizer said the thinking behind the expansion comes back to a straightforward gap in how most organizations define resilience today. “Cyber resilience in 2026 is about data, about infrastructure, and about your network control plane,” Twizer said. “You need to have all three of them. If you just have one or two, basically, you’re not resilient.”
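ControlMonkey hasn’t published its internals, but the recovery pattern Twizer describes, where an alert triggers a re-apply of the last known-good Terraform snapshot, can be sketched in a few lines of Python. The snapshot directory layout and webhook field names below are hypothetical; only the Terraform CLI invocations are standard:

```python
# Hypothetical recovery playbook: on an alert, re-apply the most recent
# known-good Terraform snapshot of the network configuration.
import subprocess
from pathlib import Path

SNAPSHOT_ROOT = Path("/var/backups/net-config")  # assumed layout: one dir per day

def latest_snapshot() -> Path:
    """Pick the newest dated snapshot directory, e.g. 2026-02-24/."""
    return max(SNAPSHOT_ROOT.iterdir(), key=lambda p: p.name)

def restore(snapshot: Path) -> None:
    """Standard Terraform CLI flow: init, then apply the snapshot's HCL."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=snapshot, check=True)
    subprocess.run(
        ["terraform", "apply", "-input=false", "-auto-approve"],
        cwd=snapshot,
        check=True,
    )

def on_alert(payload: dict) -> None:
    # e.g. triggered by a PagerDuty or Datadog webhook (field name hypothetical)
    if payload.get("event_type") == "config_corruption":
        restore(latest_snapshot())
```

The point of the pattern is that the snapshot, not the live tenant, is the source of truth during an incident; everything else is plumbing around the same two Terraform commands.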
25.02.2026 19:43 — 👍 0    🔁 0    💬 0    📌 0
Preview
IBM X-Force: AI creates security challenges, but basic system flaws are more problematic AI tools allow attackers to identify and exploit enterprise security weaknesses faster than ever, but most network invaders still rely on unpatched vulnerabilities, credential theft, and misconfigurations to wreak havoc on corporate resources, according to IBM. The vendor today released the 2026 X-Force Threat Intelligence Index, which analyzes data from incident response engagements, the dark web, and other threat intelligence sources to uncover attack trends and patterns. IBM X-Force reports that cybercriminals are exploiting basic security gaps at dramatically higher rates, accelerated by AI tools that help attackers identify weaknesses faster than ever. “IBM X‑Force observed a 44% increase in attacks that began with the exploitation of public-facing applications, largely driven by missing authentication controls and AI-enabled vulnerability discovery,” IBM stated. However, “it’s important to acknowledge AI has not changed the fundamentals of cyberattack campaigns. Attackers still rely on unpatched vulnerabilities, valid credentials and misconfigurations to accomplish their goals. What AI has changed is the speed, scale and efficiency of these attacks, which serve to make rapid detection and decisive response more important than ever,” states the X-Force report. IBM X-Force identified systemic weaknesses in access control, credential management, and software configuration. Among the findings: * A high occurrence of attackers exploiting incorrectly configured access controls suggests misconfigurations remain a primary entry point, indicating persistent gaps in the governance and enforcement of security policies. * The prominence of password brute forcing and scanning for vulnerable software reflects widespread exposure due to weak authentication practices and insufficient vulnerability management. * Patterns such as privilege escalation and session hijacking demonstrate that once attackers gain a foothold, they are able to move laterally and maintain persistence, amplifying the impact of initial breaches. Collectively, these trends indicate organizations face compounded risks from both preventable technical flaws and operational oversights, according to the X-Force report. The findings underscore the need for stronger configuration controls, proactive vulnerability management and secure development practices to mitigate recurring exploitation paths. As for the impact of AI, X-Force reports the technology is no longer an emerging concept in cybersecurity: “It’s a force multiplier actively used by both defenders and adversaries. Threat actors are already applying generative AI to scale phishing operations, accelerate malicious code development and enhance social engineering through improved language quality and realism. At the same time, defenders are using AI-driven analytics to process vast volumes of telemetry, identify anomalous behavior and shorten detection and response timelines.” “Adversaries increasingly use AI to accelerate research, analyze large data sets and iterate on attack paths in real time, allowing them to adjust tactics as conditions change rather than relying on static, preplanned actions,” the X-Force report states. 
“This operational flexibility increases dwell-time risk and places greater strain on security teams that depend on fixed rules, signatures or delayed analysis to detect malicious activity.” As multimodal AI models mature, X-Force says it expects adversaries to automate complex tasks like reconnaissance and advanced ransomware attacks, driving faster-moving, more adaptive threats. Some other pertinent findings include: * X-Force identified a nearly 4x increase in large supply chain or third-party compromises since 2020, mainly driven by attackers exploiting trust relationships and CI/CD automation across development workflows and SaaS integrations. With AI-powered coding tools accelerating software creation, and occasionally introducing unvetted code, the pressure on pipelines and open‑source ecosystems is expected to grow in 2026. * The number of active ransomware and extortion groups surged 49% year over year, a sign of ecosystem fragmentation, while publicly disclosed victim counts rose roughly 12%. * Vulnerability exploitation became the leading cause of attacks, accounting for 40% of incidents observed by X-Force in 2025. * Compromised chatbot credentials create AI-specific risks beyond simple account access. Attackers can manipulate outputs, exfiltrate sensitive data or inject malicious prompts. * Attackers are using AI to speed research, analyze large data sets and iterate on attack paths in real time. * Agentic AI has introduced new risks and amplified others. Security leaders need a comprehensive AI governance solution to scale AI with trust and transparency. “Protecting identities has always posed a challenge. It’s about to get harder. As attackers fine-tune their credential‑driven operations, IT and security leaders must turn to AI to help them gain visibility into identity-based risks and threats across their IT landscape,” the X-Force report states. “By combining AI-powered identity threat detection and response (ITDR) and identity security posture management (ISPM) services and solutions, organizations can move more quickly and efficiently to identify vulnerabilities and prevent attacks from happening.”
25.02.2026 19:12 — 👍 0    🔁 0    💬 0    📌 0
Preview
Netskope targets AI-driven network bottlenecks with AI Fast Path Netskope has updated its NewEdge private cloud with AI Fast Path, a new solution announced this week that allows enterprises to reduce latency for AI applications while maintaining security controls. As enterprise companies continue to adopt generative AI tools, security teams are grappling with how to enforce AI governance controls without causing end users to bypass security, according to Netskope. “AI apps are sending tons of traffic and exchanging lots of data,” says Robert Arandjelovic, product and solutions marketing lead at Netskope. “There’s a natural latency there. If security adds more on top of that, users will try to work around it.” AI Fast Path focuses on optimizing traffic flows between enterprise users, the Netskope cloud, and major AI providers. Netskope says more than 90% of its 120 NewEdge data centers can now connect to leading AI applications in less than five milliseconds from the Netskope cloud, an effort aimed at minimizing added delay as traffic is inspected for data loss prevention (DLP), threat protection, and policy enforcement. “Customers realized that if they don’t adopt these AI apps, they’re probably going to be extinct in a few years. At the same time, we can’t afford to compromise on security,” Arandjelovic says. “So, with NewEdge and the AI Fast Path, we’ve created a super-optimized path where there is literally barely a bump in the wire. At the same time, they are not compromising security, because you’re passing through our cloud and getting all the benefits of our data protection and threat protection.” As a set of capabilities within NewEdge, AI Fast Path enables better performance and efficiency for AI applications. According to Netskope, AI Fast Path provides enterprises with: * Faster inference results for enterprise users from prompt to response, minimizing “time-to-first-token” (TTFT) for conversational AI. * Agentic AI optimization by accelerating complex, multi-prompt agentic workflows with the high-speed processing required for rapid, iterative AI subtasks. * Optimization of Large Language Model (LLM) performance when accessing large volumes of distributed data (for example, via Model Context Protocol gateways). * Support for Retrieval-Augmented Generation (RAG) by accelerating the connectivity between LLMs and external data sources for real-time outputs. NewEdge is Netskope’s privately built global network that carries customer traffic through more than 120 data centers worldwide before it reaches cloud and AI services. It’s a foundational part of the Netskope One secure access service edge (SASE) product. When a customer deploys Netskope, an agent on the user’s device automatically routes their web, SaaS, private app, and AI traffic into the Netskope cloud. AI Fast Path optimizations give AI workloads a direct, low-latency route to services such as Google Gemini, ChatGPT, and Claude, due in part to Netskope expanding peering relationships from 10,000 to 11,000. The AI Fast Path capabilities are included as part of the NewEdge infrastructure and available to existing customers without additional licensing, the company says.
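Time-to-first-token is easy to measure for yourself, which is useful for checking how much latency a security hop actually adds before taking vendor numbers on faith. Here is a minimal sketch, assuming an OpenAI-compatible streaming chat endpoint; the URL, model name, and key are placeholders:

```python
# Measure time-to-first-token (TTFT) against a streaming chat endpoint.
import time
import requests  # pip install requests

URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # placeholder credential

payload = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,  # tokens arrive incrementally as server-sent events
}

start = time.perf_counter()
with requests.post(URL, json=payload, headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # first non-empty SSE line carries the first token
            ttft_ms = (time.perf_counter() - start) * 1000
            print(f"TTFT: {ttft_ms:.1f} ms")
            break
```

Run it once over a direct path and once through the inspection proxy; the difference in TTFT is the security overhead that Netskope says AI Fast Path is designed to minimize.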
25.02.2026 17:19 — 👍 1    🔁 0    💬 0    📌 0
Preview
AMD strikes massive AI chip deal with Meta Meta and AMD have announced a deal whereby the social media giant will purchase up to 6 gigawatts’ worth of CPUs and GPUs from AMD. The first gigawatt’s worth of chips is set for delivery to Meta in the second half of this year and consists of a custom version of AMD’s Instinct MI450 GPU accelerators and 6th Generation AMD Epyc CPUs, codenamed “Venice” and “Verano,” making Meta the launch customer for both chips. Venice and Verano use the Zen 6 CPU core design, a ground-up new architecture optimized for data-center throughput and HPC workloads. AMD claims up to 70% higher performance over the previous generation of Epyc under certain workloads. Zen 6 comes in two designs: standard Zen 6, with up to 96 cores / 192 threads in high-end configurations using 12-core chiplets, and Zen 6c dense cores, which have fewer features but are smaller, supporting up to 256 cores / 512 threads. It is expected to more than double per-socket memory bandwidth compared to 5th-Gen Epyc, to around 1.6 TB/s. The new chips will run AMD’s ROCm software and are built on the AMD Helios rack-scale architecture. AMD and Meta jointly developed Helios through the Open Compute Project to enable scalable, rack-level AI infrastructure. No dollar figures were disclosed, but according to the Wall Street Journal, the deal is worth more than $100 billion, with each gigawatt of compute alone worth tens of billions in revenue for AMD. The funding structure is also unusual. Instead of a straight cash purchase, AMD has reportedly given Meta warrants to buy up to 160 million shares at $0.01 each. Stock warrants are financial instruments that give the holder the right (but not the obligation) to buy a company’s stock at a fixed price before a certain expiration date. With 1.6 billion shares outstanding, Meta is poised to acquire 10% of AMD. But perhaps not. These shares vest only as Meta buys more computing capacity, and the final tranche vests only if AMD’s stock price hits $600, according to a recent 8-K filing. AMD shares are valued at just over $200 as of this writing. The deal is nearly identical to the one AMD struck with OpenAI last October. That deal was also for 6 GW worth of GPUs and included a warrant for up to 160 million AMD common shares structured to pay out once certain targets were met. Meta is not playing favorites. Last week it announced that it will also deploy standalone Nvidia Grace CPUs in its production data centers, citing greatly improved performance-per-watt. That doesn’t come as a surprise to Gaurav Gupta, vice president analyst at Gartner, who says the industry is compute constrained, and hyperscalers and frontier-model companies will use a multisource approach to get access to compute. “No one wants to be stuck with a single vendor. Diversify, and then different workloads have different compute needs,” he said.
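The warrant math is worth spelling out, using only the figures reported above; the intrinsic-value line is an illustration at the quoted share price, not a valuation:

```python
# Back-of-the-envelope on the reported AMD-Meta warrant terms.
warrant_shares = 160_000_000        # up to 160M shares, as reported
strike = 0.01                        # $0.01 exercise price
shares_outstanding = 1_600_000_000   # ~1.6B AMD shares outstanding
current_price = 200.0                # approximate AMD price at this writing

stake = warrant_shares / shares_outstanding
print(f"Potential stake: {stake:.0%}")  # 10%

# Paper value if fully vested and exercised at today's price:
value = warrant_shares * (current_price - strike)
print(f"Intrinsic value at ${current_price:.0f}: ${value / 1e9:.0f}B")  # ~$32B
```

The vesting conditions are what make the instrument interesting: the 10% stake and the roughly $32 billion of paper value only materialize if Meta keeps buying capacity and AMD’s stock triples.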
25.02.2026 14:26 — 👍 0    🔁 0    💬 0    📌 0
Preview
From packets to prompts: What Cisco’s AITECH certification means for IT pros Cisco’s new AI Technical Practitioner (AITECH) certification marks a key moment in AI’s transition from an interesting experiment to a core technical requirement. Unveiled at Cisco Live EMEA, the AITECH certification reinforces the idea that AI is a core skill for mainstream IT professionals, not just data scientists and ML researchers. AI is now part of the infrastructure job, not something that lives off to the side in an innovation lab. For decades, Cisco certifications have been the gold standard for network professionals who want to validate an in-depth knowledge of networking, not just Cisco technology. In fact, the Cisco Certified Internetwork Expert (CCIE) is one of the hardest to get and most sought-after certifications in all of tech. With AITECH, Cisco is using its position as a trainer to bring better practical AI knowledge to the technical people who will use it in their day-to-day jobs. Many industry watchers, including me, have stated that AI won’t take most technical jobs—rather, it’s people who know how to use AI that will. If that’s true, reskilling is imperative, but there hasn’t been a great path for industry professionals to follow to attain those AI skills. That’s the gap Cisco is trying to fill. ## Cisco AITECH explained At its core, AITECH is a role-oriented certification that validates the ability to embed AI into day-to-day technical work: coding, data analysis, automation, and workflow design. Rather than teaching candidates to build models from scratch, it focuses on using existing AI capabilities to modernize how infrastructure and operations teams deliver outcomes. The associated exam, Cisco AI Technical Practitioner (800-110 AITECH) v1.0, is a 60-minute test that measures skills in several key areas: generative AI models, prompt engineering, AI ethics and security, data research and analysis, AI for code and workflow optimization, and agentic AI. The learning path is delivered through Cisco U. and includes hands-on labs and simulations that show practical use cases across Cisco and multivendor environments. Cisco positions the AITECH learning path as a bridge from “traditional knowledge-based work” to innovation-driven roles augmented by AI, explicitly targeting professionals who need to design technical solutions, automate tasks, and lead teams using modern AI tools and methodologies. The curriculum spans AI-assisted code generation, AI-driven data analysis, model customization (including RAG), and workflow automation wrapped in governance and security best practices. ## Why this certification matters now The timing of AITECH aligns with the reality facing most IT organizations: AI is already creeping into operations, security, networking, and collaboration, but skills lag badly. Cisco explicitly describes AITECH as meant to “close the AI skills gap” and prepare technical staff to confidently embed AI into daily operations and drive adoption inside their organizations. Instead of creating yet another “AI expert” badge, Cisco is acknowledging that: * AI is becoming a first-class consumer of infrastructure resources, from GPUs to storage to high-bandwidth networking. * Network and infrastructure teams need to understand AI workflows well enough to support and optimize them, not just keep the pipes up. 
* Everyday technical tasks—writing code, troubleshooting, analyzing logs, creating reports—can be materially improved by AI if practitioners know how to use it safely and effectively. In that context, AITECH is less about learning isolated AI theory and more about hardening the applied AI skills that will define the next generation of infrastructure roles. For enterprises staring down a flood of AI projects, having a common competency baseline around prompt engineering, ethics, data practices, and automation is increasingly nonnegotiable. At Cisco Live, I caught up with Par Merat, vice president of learning at Cisco, and we talked about this certification and the thought process behind it. “We are focused on reskilling engineers around AI and how that can help them with their current jobs while preparing for the future,” Merat said. “This looks at every aspect of running a network—from initial design to day-to-day operations to troubleshooting and optimization.” “We introduced the AI Solutions on Cisco Infrastructure Essentials learning path last year, and we have had tremendous interest and expect the same with this,” she added. ## Who should care about AITECH Cisco’s own targeting for AITECH reads like a roster of Network World’s core audience: IT and network engineers, data analysts, AIOps specialists, solutions architects, technical leads, managers, and business process analysts. In practice, three groups should be especially interested. 1. **Infrastructure and network engineers** These are the people being asked to “make AI work” in environments that were never designed for GPU-heavy, latency-sensitive workloads. AITECH gives them enough understanding of AI models, data flows, and security implications to design and operate infrastructure that is AI-ready—without forcing them to become full-time data scientists. 2. **Ops, AIOps, and automation teams** Operations teams are drowning in data and repetitive tasks, making them natural beneficiaries of AI-driven automation. The certification’s emphasis on AI-assisted code and workflow optimization, agentic AI, and data analysis directly maps to building smarter runbooks, automated remediation, and more intelligent observability and pipeline-driven automation. 3. **Technical leaders and architects** For architects and technical managers, AITECH offers a structured way to understand how AI can be safely woven into existing architectures and processes. Topics like AI ethics, security, and governance help leaders create guardrails while still encouraging experimentation and innovation across teams. Training providers outside Cisco echo this positioning, describing the certification as a minimum requirement for technical roles that involve AI-driven automation, data analytics, and solution design in modern enterprises. ## How AITECH fits into Cisco’s broader AI strategy The AITECH certification is not meant to exist in isolation. Rather, it’s part of a broader AI-centric pivot in Cisco’s portfolio and learning ecosystem. Cisco has outlined an AI Infrastructure track that includes both the AI Technical Practitioner and the Cisco AI Infrastructure Specialist certification (which is tied into the existing CCNP Data Center path). Where AITECH focuses on applied AI skills across workflows and tools, the AI Infrastructure Specialist targets engineers, architects, operations teams, and service providers responsible for deploying, operating, and troubleshooting AI workloads on Cisco data center infrastructure at scale. 
Cisco recommends an “AI Solutions on Cisco Infrastructure Essentials” learning path on Cisco U. ahead of that specialist exam, underscoring how deeply AI is being woven into the traditional infrastructure curriculum. Cisco has also publicly framed infrastructure, trust, and model development as the three main AI challenges, emphasizing the need for robust networks, secure data handling, and safe AI adoption. AITECH addresses two of those pillars directly—operationalizing AI on modern infrastructure and building trusted, governed AI workflows—while the infrastructure specialist certification doubles down on the hardware and platform side. ## What AITECH signals for Cisco For Cisco as a company, AITECH is strategically important for several reasons. First, it reinforces Cisco’s story that its value in the AI era goes beyond hardware speeds and feeds to include skills, platforms, and end-to-end solutions. By building AI into its certification stack, Cisco is training an ecosystem of practitioners who are comfortable using AI-powered tooling across networking, security, collaboration, and observability products. Second, it helps Cisco make good on its AI-centric messaging around customer experience and secure networking in the AI era. Cisco has been clear that it wants to centralize customer experience around AI and position its portfolio as a foundation for AI-driven operations. Having a formally trained practitioner base is essential to delivering on that promise in the field. If Cisco is going to be “critical infrastructure for the AI era,” the people who work with that technology need the skills to deploy and operate it. Third, it creates a new entry point into the Cisco learning universe at a time when many early-in-career professionals are more attracted to AI roles than classic infrastructure tracks. AITECH offers those candidates a way into AI-adjacent roles that still leverage Cisco’s platforms, effectively future-proofing the relevance of Cisco certifications in a market that is rapidly reskilling around AI. ## What IT pros should watch for next For IT leaders and practitioners, the emergence of AITECH and the broader AI Infrastructure track is a sign to start thinking about AI skills as part of your core certification strategy, not a side project. Here are a few practical implications: * Expect AI literacy to become table stakes in job descriptions for network, data center, and operations roles, with certifications like AITECH cited as proof points. * Plan for AI-augmented workflows—code, analysis, troubleshooting—to become the norm, meaning teams without applied AI skills will move slower and deliver less value. * Anticipate vendor stacks, including Cisco’s, to increasingly bundle AI capabilities into infrastructure and management platforms, making practitioner-level AI skills essential to unlock full value. In other words, Cisco’s AI Technical Practitioner certification is less about creating a new niche specialist and more about redefining what it means to be a “technical practitioner” in the first place. For this audience, that makes AITECH worth watching—not just as another logo on a résumé, but as an indicator of where infrastructure careers, and Cisco’s strategy, are headed next.
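To make the “applied AI” focus concrete, here is a hypothetical illustration, not Cisco courseware, of the kind of day-to-day task the AITECH curriculum targets: using a model to triage network logs. The endpoint URL and model name are placeholders for whatever OpenAI-compatible service an organization actually sanctions:

```python
# AI-assisted log triage: the sort of everyday task AITECH-style skills cover.
import requests  # pip install requests

URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # placeholder credential

def triage(log_lines: list[str]) -> str:
    """Ask a model to cluster errors and suggest a first remediation step."""
    prompt = (
        "You are a network operations assistant. Group these log lines by "
        "probable root cause and suggest one next step per group:\n"
        + "\n".join(log_lines)
    )
    resp = requests.post(URL, headers=HEADERS, json={
        "model": "example-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(triage([
    "%BGP-5-ADJCHANGE: neighbor 10.0.0.2 Down BGP Notification sent",
    "%LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to down",
]))
```

The hard parts the certification emphasizes are everything around a snippet like this: prompt design, deciding what data is safe to send to a model, and governing what an agent is allowed to do with the answer.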
24.02.2026 20:08 — 👍 0    🔁 0    💬 0    📌 0
Preview
HPE’s latest Juniper routers target large‑scale AI fabrics HPE is taking the wraps off two new router families aimed at helping customers build the infrastructure needed to support large‑scale AI fabrics, distributed cloud environments, and data center interconnects (DCI). At next week’s Mobile World Congress 2026, the vendor will introduce the additions to its core Juniper PTX Series routers: the high-end Juniper PTX12000 family, and the compact Juniper PTX10002 family targeted at data center edge and metro aggregation deployments. All are built on the Juniper Express 5 ASIC, which promises 49% better power efficiency than previous generations, according to Julius Francis, head of product marketing and strategy at HPE. “AI is rewriting networks, and AI adoption is driving enterprise network traffic to AI data centers, and this impacts hyperscalers, neoclouds, and service partners who are building the infrastructure for these applications,” Francis told _Network World_. “Ultra-low latency and ultra reliability is not nice to have anymore. It’s required.” New traffic patterns are coming into the mix, and they’re a lot more symmetric and more bursty, Francis said. Traditional networks that rely on oversubscription won’t work anymore, and that’s creating a lot of networking challenges. “The one main point is the fact that AI traffic, especially DCI traffic, is exploding,” Francis said. “AI factories and inferencing all need to be synced up all the time. So, it’s a great inflection point,” he said, as network operators consider what kind of networking platform they need to build and what type of architecture the AI network will require. To that end, the new PTX12000 family features high-throughput efficiency, enables high-radix architecture, and provides deep buffering. Specifically, HPE says the PTX12000 800GbE family includes: * The 8-slot, 22RU PTX12008, offering up to 345.6 Tbps of total bandwidth, and the 12-slot, 32RU PTX12012, offering up to 518.4 Tbps * A high-radix 43.2 Tbps line card, supporting 54 × 800GbE ports to deliver the throughput required for large‑scale AI traffic flows * Full 800GbE ZR/ZR+ coherent optics support across all ports, with QSFP‑DD and OSFP flexibility * Deep buffering to handle AI traffic bursts and maintain lossless performance * Built-in security, including line-rate MACsec encryption, distributed-denial-of-service (DDoS) protection, and hardware-based integrity protection * Future support for 1.6T bandwidth, with modular power and advanced cooling to support multiple generations of line cards The Juniper PTX10002 line of fixed-form 2U, 800GbE, 28.8 Tbps capacity routers is aimed at a variety of network roles, including peering, data center interconnect and data center edge, metro aggregation, and AI data center networking, according to Francis. The three new models give customers several options for configurations and throughput capacity, but they all share support for the same deep buffers, security, and optics for AI network fabric buildouts, Francis said. In addition to the new hardware, HPE added new AI support, including a Model Context Protocol (MCP) server, to the Juniper Routing Director to help customers build, configure, and optimize networks, Francis said. The Routing Director is the vendor’s routing automation and traffic engineering platform. 
Juniper Routing Director provides structured, real-time context from across the WAN, HPE says, and it enables agentic AI, including an MCP server, to expose data and actions in a model-friendly way. “The result? With natural language, an AI assistant can go beyond analysis—it can act (with the right permissions) to orchestrate changes, validate configurations, run active tests, optimize services, and even help manage security patch workflows,” HPE wrote in a blog post about the enhancement.
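The headline throughput figures in the spec list follow directly from the line-card arithmetic, as this quick check shows:

```python
# Sanity-checking the PTX12000 throughput figures from the spec sheet.
ports_per_card = 54          # 54 ports per high-radix line card
port_speed_gbps = 800        # 800GbE per port

card_gbps = ports_per_card * port_speed_gbps
print(card_gbps / 1000, "Tbps per line card")   # 43.2 Tbps

print(8 * card_gbps / 1000, "Tbps")    # 345.6 Tbps -> 8-slot PTX12008
print(12 * card_gbps / 1000, "Tbps")   # 518.4 Tbps -> 12-slot PTX12012
```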
24.02.2026 18:49 — 👍 0    🔁 0    💬 0    📌 0
Preview
New Relic connects observability platform to business outcomes New Relic this week updated its observability platform with new capabilities designed to tie application health and performance directly to business outcomes. The updates will help enterprises that are embedding AI into revenue-generating applications and using code generation tools in their software development processes, New Relic says. Brian Emerson, New Relic’s chief product officer, says there are three forces reshaping enterprise observability: AI-assisted development, the application explosion that those tools will produce, and the unpredictability introduced when AI is embedded directly into customer-facing workloads. “When you start embedding AI into applications, non-deterministic things start happening,” Emerson says. “Failures are kind of silent. You need to understand behavior patterns; that’s a very different world than ‘are things red, yellow, or green.’” For instance, a new feature called Intelligent Workloads can automate the discovery and mapping of application dependencies to create a “360-degree view” of performance, infrastructure, and end-user impact. Intelligent Workloads will communicate key performance indicators (KPIs) such as abandoned carts and conversion rates rather than simple green or red technical indicators. “When your system is slow, is it getting worse, or is that fine?” says Nic Benders, chief technology strategist at New Relic. “Sure, your CPU is high. That’s an infrastructure concern, but what is the impact on whatever it is that you exist to do? That is your number one concern, and we should be speaking to you about that.” With this release, New Relic also introduced a site reliability engineering (SRE) agent that uses telemetry data such as metrics, events, logs, traces, and node relationships to automate root-cause analysis and prioritize alerts to engineers. New Relic envisions a “digital war room” in which AI agents orchestrate incident response across network, database, and application domains, while network engineers review and approve recommended actions rather than doing the initial triage. New Relic’s 2025 Observability Forecast found that the majority of organizations are aware of the importance of resolving issues before they impact performance, and that the use of AI monitoring capabilities grew from 42% in 2024 to 54% in 2025. Industry watchers believe that vision will take some time to become a reality across enterprise organizations. “Every organization is a snowflake in its adoption curve and readiness timeline,” says Stephen Elliot, global group vice president at IDC. “IT behavioral change is one of the most underreported requirements for agentic AI adoption. Trust is the required ingredient.” New Relic also expanded its Digital Experience Monitoring suite to support micro frontend (MFE) architectures, where web applications are broken into smaller, team-managed components. Engineers can now monitor every component and collect metrics on performance timing, errors, renders, and lifecycle methods to trace how dependencies affect the end-user experience. Separate agentic AI monitoring capabilities add a service map of agent-to-agent interactions and drill-down traces for individual agents and tools, which New Relic says will address a visibility gap as multi-agent deployments grow. IDC’s Elliot says the business-outcome framing is an industry-wide trend, but that New Relic’s extension of digital experience management into revenue intelligence is meaningful. 
“Every vendor needs to communicate value in both technology and business terms,” he explains. “One is no longer enough.” Elliot also says New Relic’s hybrid OpenTelemetry approach, which lets customers use OTEL instrumentation without separate collector infrastructure, is increasingly table stakes for enterprise buyers. “OTEL is here to stay, and its adoption continues to increase. It is increasingly a product requirement to support as more enterprises make it part of their observability strategies,” Elliot says. Intelligent Workloads is available as a preview for users of New Relic’s transaction monitoring solution, Transaction 360. The remaining capabilities are available in preview to all New Relic platform users.
24.02.2026 16:28 — 👍 0    🔁 0    💬 0    📌 0
Preview
Nvidia lines up partners to boost security for industrial operations Nvidia has extended its collaborations with a handful of security vendors in an effort to improve real-time threat detection and response across operational technology (OT) environments and industrial control systems (ICS). “Many of these systems were originally designed for reliability and longevity, not for today’s threat techniques,” wrote Itay Ozery, director of product marketing for networking at Nvidia, in a blog post about the news. Nvidia’s collaborations with Akamai, Forescout, Palo Alto Networks, Siemens and Xage Security are aimed at bringing accelerated computing and AI to OT cybersecurity, according to Ozery. “These efforts represent a fundamental shift in OT and ICS cybersecurity, where security is embedded into and distributed across infrastructure, enforced at the edge and coordinated through centralized, AI-driven intelligence, bringing modern cybersecurity to the systems that keep the physical world running,” he wrote. Nvidia announced the news at the S4x26 OT and ICS security conference. The joint efforts will use Nvidia’s BlueField DPUs, which can directly handle security workloads, offloading those tasks from local host CPUs, and can enforce identity-based access, micro-segmentation and workload policies, according to Nvidia. Here are some details of each collaborator’s plans to protect critical infrastructure: #### Akamai extends its micro-segmentation and zero-trust security platform Guardicore to run on Nvidia BlueField DPUs The integration offloads user-configurable security processes from the host system to the Nvidia BlueField DPU and enables zero-trust segmentation without requiring software agents on fragile or legacy systems, according to Akamai. Organizations can implement this hardware-isolated, “agentless” security approach to help align with regulatory requirements and lower their risk profile for cyber insurance. “It delivers deep, out-of-band visibility across systems, networks, and applications without disrupting operations. Security policies can be enforced in real time and are capable of creating a strong protective boundary around critical operational systems. The result is trusted insight into operational activity and improved overall cyber resilience,” according to Akamai. #### Forescout works with Nvidia to bring zero-trust technology to OT networks Forescout applies network segmentation to contain lateral movement and enforce zero-trust controls. The technology will be further integrated into partnership work already being done by the two companies. By running Forescout’s on-premises sensor directly on the Nvidia BlueField, part of the Nvidia Cybersecurity AI platform, customers can offload intensive computing tasks, such as deep packet inspection. This speeds up data processing, enhances asset intelligence, and improves real-time monitoring, providing security teams with the insights needed to stay ahead of emerging threats, according to Forescout. #### Palo Alto to demo Prisma AIRS AI Runtime Security on Nvidia BlueField DPU Palo Alto Networks recently partnered with Nvidia to run its Prisma AIRS (AI Runtime Security) package on the Nvidia BlueField DPU and will show off the technology at the conference. The technology is part of the Nvidia Enterprise AI Factory validated design and can offer real-time security protection for industrial network settings. 
“Prisma AIRS AI Runtime Security delivers deep visibility into industrial traffic and continuous monitoring for abnormal behavior. By running these security services on Nvidia BlueField, inspection and enforcement happen directly at the infrastructure level, closer to the workloads,” Palo Alto stated. #### Siemens to demo Nvidia BlueField integration with its IT/OT platform At the S4x26 security conference, Siemens said it will demonstrate its AI-ready Industrial Automation DataCenter, a platform that collects, stores, processes, and serves operational and automation data across industrial environments. The system integrates Nvidia BlueField devices to create an integrated, secure edge infrastructure spanning industrial OT workloads and the data center or cloud, according to Siemens. #### Xage links its Fabric Platform with Nvidia BlueField In another technology demonstration, Xage said it will show how its distributed, identity-based security system, the Xage Fabric Platform, operates with Nvidia BlueField devices to help customers protect energy assets, manage third-party access and secure AI-driven operations. “Xage applies least-privilege controls at every step of these interactions, governing not only which agents can access specific data, pipelines, or models, but also the exact actions agents can perform—and for how long,” the company wrote on its website. “With role-based segmentation running at line speed on BlueField, organizations can prevent unauthorized privilege escalation and data leakage and enforce policy-based privilege deescalation to block risky actions, ensuring that AI agents remain trustworthy and compliant as they scale and evolve.”
24.02.2026 01:51 — 👍 0    🔁 0    💬 0    📌 0
Preview
Pure Storage becomes Everpure, acquires 1touch Pure Storage has changed its name to Everpure and bought data classification company 1touch, the company announced on Monday. It’s moving to become not just a storage company but also a data management company that aims to help its enterprise customers make data usable for AI. The vendor now known as Everpure is a significant player in the enterprise storage market and has racked up acknowledgements from analyst firms. Gartner ranked it ahead of competitors in the “leader” section of its enterprise storage market analysis, and in GigaOm’s December report on primary storage, it received top rankings for key features and tied for second place on emerging features. According to IDC, it was the fastest-growing enterprise storage company last year, showing a 15.5% increase from 2024, which helped it pull ahead of HPE to become the fourth-largest company in the space after Dell, Huawei, and NetApp. “We are not going to stop doing storage,” says Prakash Darji, general manager of the digital experience business unit at Everpure, in explaining the name change. “Make sure that comes across. But we’ve been introducing capabilities in data management.” Yet customers still thought of it as a storage company. “The name was constraining,” Darji says. “We found it limiting in the category expansion where we were going.” Everpure is meant to combine the brand-name recognition of “Pure” with the “Ever” from its Evergreen subscription storage product line. ## Good data is key to good AI According to a Boston Consulting Group survey released in September, 68% of 1,250 senior AI decision makers said the lack of access to high-quality data is a key challenge when it comes to adopting AI. Other recent research confirms this. In an October Cisco survey of over 8,000 AI leaders, only 35% of companies have clean, centralized data with real-time integration for AI agents. And by 2027, according to IDC, companies that don’t prioritize high-quality, AI-ready data will struggle to scale gen AI and agentic solutions, resulting in a 15% productivity loss. “Every enterprise is talking about AI, but most aren’t AI ready because their data is fragmented and poorly cataloged,” says Brad Gastwirth, global head of research and market intelligence at Circular Technology, a supply chain consultancy. “If Everpure can help turn storage into a structured, intelligent data foundation, that could materially shorten the path from proof of concept to production AI.” It’s not an easy process. It could take years to shift from being viewed primarily as a storage hardware company to a data platform company, Gastwirth says. “There is product integration to get right, but there is also a commercial shift. Sales teams need to sell differently, customers need to budget differently, and the market needs proof points.” And there are many companies in the race to be the data platform for AI. “The difference is where it sits in the stack,” he says. “If Everpure can bake more intelligence directly into the core storage layer instead of layering tools on top, that can actually simplify things.” Putting the control layer closer to the data can be helpful as companies deploy agentic AI. AI agents need good access to data to function well, whether as part of their training, in RAG embedding, or via MCP servers. But ensuring that agents only access the data they’re supposed to is a challenge. 
“The shift to agentic AI is a big reason why you’d want to have your data intelligence tied to your data infrastructure,” says Zeus Kerravala, founder and principal analyst at ZK Research. When Charles Giancarlo took over as Pure Storage CEO back in 2017, he already had a vision for how AI would cause data centers to evolve, Kerravala says. “He thought storage had a bigger role to play in AI, but that it had to change,” he says. “And so everything he’s done has been to evolve the company.” The acquisition of 1touch is a big part of that, Kerravala says. “It brings a lot of intelligence to add to the storage footprint,” he says. That includes data discovery, data classification, and making sure that data is ready for AI. “Pure didn’t really have that capability before.” Everpure’s Darji agrees that data management needs to be closer to the storage in the AI era—and that Everpure is now positioned well to offer just that. Today, the security of a file or object is generally connected to that object and is handled by the security of the storage system, he says. “And if only these agents can see purchase orders, then it’s very relevant to understand that storage security is important for AI security,” Darji says. Then, say, a company also needs to track information about what that file contains, such as the fact that it’s a purchase order for a particular amount—that information would reside in a separate database. “If I store that information outside, agents have to go look at that, then look at the storage, which is highly inefficient,” Darji says. If the classification of the data and other enrichment and context can reside right next to the storage layer, it becomes much more accessible to AI. And while some of 1touch’s competitors are highly specialized—for example, they might just look for personally identifiable information—1touch has a more flexible platform, so enterprises can adapt the classification to their own requirements. There are ready-to-go models for privacy and security, but customers can use other classifiers, including their own large language models. “It’s model-agnostic and pluggable,” Darji says. “You can classify bone breaks in an X-ray, for example. It’s very open from an architectural standpoint.”
24.02.2026 01:07 — 👍 0    🔁 0    💬 0    📌 0
Preview
Favorable Wi-Fi 7 prices won’t be around for long, Dell’Oro Group warns If you’re considering a Wi-Fi 7 upgrade, now’s the time, because prices are unusually low, according to the latest research from Dell’Oro Group. A number of factors combined to establish initial Wi-Fi 7 pricing far lower than is normal for a new technology, says Siân Morgan, research director at Dell’Oro Group and lead author of the company’s latest quarterly Wireless LAN 5-Year Forecast Report. When they debuted, the worldwide average selling price (ASP) for Wi-Fi 7 indoor access points was lower than that for Wi-Fi 6 or 6E, Morgan says. “And it has stayed lower over the 10 quarters since it was introduced to the market.” (See graphic, below.) [Graphic: worldwide average selling prices for Wi-Fi indoor access points by generation; source: Dell’Oro Group] One factor in that phenomenon is regional influence. Wi-Fi 7 went to market outside of the U.S. in a way that pulled prices down globally. Chinese vendors, including Huawei and H3C, were shipping Wi-Fi 7 access points up to a year ahead of their North American competitors, she says. “They were able to capture some of the market, like in CALA [Caribbean and Latin America] and Europe,” Morgan says. “And they pulled the price down low.” By the time North American vendors caught up with shipments, they essentially had no choice but to price aggressively as well – even stalwarts like Cisco. “Cisco had a deliberate strategy of introducing Wi-Fi 7 at similar prices to the older 6 or 6E technology, with very little premium for the new technology, and that has really helped to keep the price low,” she says. Another contributing factor is that some Wi-Fi 7 access points have only two radios, whereas Wi-Fi 6 APs generally have three to support the 2.4, 5 and 6 GHz bands, Morgan says. Finally, some vendors offer a wider range of Wi-Fi 7 equipment models than in previous generations. The lower-end models in their portfolios help reduce the average price of all Wi-Fi 7 products, Morgan’s research shows. So, whether you pay a premium for Wi-Fi 7 vs. Wi-Fi 6 or 6E may depend on which models you need. ## Act now, these deals won’t last Whatever your particular case, if you are in the market for a Wi-Fi 7 upgrade, don’t dally. “In the overall wireless LAN market, not just Wi-Fi 7, we’re going to start to see prices rise,” Morgan says. Price hikes will be largely due to the uncertain availability of memory chips required for WLAN hardware – an issue that’s driving price hikes across all sorts of equipment. “Vendors have already started to raise list prices, even though it’s been only a few percentage points so far,” she said. 
“We expect further price hikes over the next year.” Lead times are also volatile. Channel partners are telling Dell’Oro that lead times can vary day-to-day, measured in months one day and weeks the next. “There doesn’t seem to be a consistent trend across specific products or specific vendors. It seems volatile across the whole market,” Morgan says. As a result, partners are tightening the windows on how long quotes are valid, because they don’t know how or whether their own pricing will change. While there’s no hard-and-fast rule of thumb, and timing may depend on existing contracts, Morgan says the typical window is probably a matter of weeks. ## Plenty of rationale to upgrade In addition to potential deals, Wi-Fi 7 holds plenty of reasons to justify an upgrade, headlined by multi-link operation (MLO). MLO enables devices to use multiple frequency bands simultaneously, to avoid congestion on a single band and increase reliability. One of those bands may be the 6 GHz band, introduced with Wi-Fi 6E. But with 6E, devices must choose a single band rather than use multiple bands simultaneously. The additional band could be a “game changer” for enterprises. The original 2.4 GHz band is typically crowded and subject to interference, as is the 5 GHz band in many cases. “To be able to have devices access this new band that isn’t being used by wireless LAN at all today, or very little, really is going to improve the quality of the connection,” Morgan says. Morgan expects AI operations (AIOps) features will likewise be attractive to enterprises because they help reduce the cost of operating and maintaining networks. “Many vendors are offering AI-fueled features to help automate operations,” she says. “I expect 2026 is going to be the year enterprises can really prove in that return on investment on AIOps features.”
23.02.2026 10:30 — 👍 0    🔁 0    💬 0    📌 0
Preview
Raising the temp on liquid cooling Nvidia CEO Jensen Huang announced early this year that the new Vera Rubin processor, which is twice as powerful as the previous Grace Blackwell chip, doesn’t require its cooling water to be at a very cold temperature. In fact, the Vera Rubin can be cooled with water that’s 45 degrees Celsius, which is 113 degrees Fahrenheit. That’s hotter than the recommended settings for a hot tub. And it’s one degree hotter than the peak summer temperature in Las Vegas in 2025. “With 45 degrees Celsius, no water chillers are necessary for data centers,” Huang said in his keynote address at the Consumer Electronics Show. “We’re basically cooling this supercomputer with hot water. It’s so incredibly efficient.” The core benefit of using higher water temperatures is that it fundamentally changes how much mechanical cooling data centers have to use, says Alex Cordovil, research director at Dell’Oro Group. “As supply water temperatures move to 38 degrees Celsius and above, operators can dramatically expand the number of hours where they rely on economization,” he says. In many climates, free coolers can handle much more of the work, with traditional chillers either downsized or used only during peak usage times. “That translates directly into lower energy consumption,” Cordovil says. ## Liquid cooling minus the chillers Liquid cooling for data centers is nothing new. IBM was using water cooling for its System/360 mainframes back in the 1960s. Even hot-water cooling has been around for many years. In 2012, for example, IBM announced the world’s first commercially available hot-water-cooled supercomputer, capable of handling temperatures as high as 45 degrees Celsius (113 Fahrenheit), just like Vera Rubin. In the winter, the hot water was used to heat the buildings in the Leibniz Supercomputing Center campus, saving $1.25 million a year. IBM isn’t the only one. “We’ve been doing liquid cooling since 2012 on our supercomputers,” says Scott Tease, vice president and general manager of AI and high-performance computing at Lenovo’s infrastructure solutions group. “And we’ve been improving it ever since—we’re now on the sixth generation of that technology.” And the liquid Lenovo uses in its Neptune liquid cooling solution is warm water. Or, more precisely, hot water: 45 degrees Celsius. And when the water leaves the servers, it’s even hotter, Tease says. “I don’t have to chill that water, even if I’m in a hot climate,” he says. Even at high temperatures, the water still provides enough cooling to the chips that it has real value. “Generally, a data center will use evaporation to chill water down,” Tease adds. “Since we don’t have to chill the water, we don’t have to use evaporation. That’s huge amounts of savings on the water. For us, it’s almost like a perfect solution. It delivers the highest performance possible, the highest density possible, the lowest power consumption. So, it’s the most sustainable solution possible.” So, how is the water cooled down? It gets piped up to the roof, Tease says, where there are giant radiators with massive amounts of surface area. The heat radiates away, and then all the water flows right back to the servers again. Though not always. The hot water can also be used to, say, heat campus or community swimming pools. “We have data centers in the Nordics who are giving the heat to the local communities’ water systems,” Tease says. 
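The physics of why 45-degree water still cools effectively is straightforward: chips run far hotter than 45 degrees Celsius, so a temperature gradient remains, and water’s high heat capacity lets even a modest flow carry away tens of kilowatts. A rough illustration follows; the flow rate and temperature rise are assumed, representative values:

```python
# How much heat "hot" 45C water can still carry away.
CP_WATER = 4186        # specific heat of water, J/(kg*K)

flow_kg_per_s = 1.0    # assumed: ~1 liter per second through the loop
supply_c = 45.0        # supply water temperature (per Vera Rubin / Neptune)
return_c = 55.0        # assumed return temperature after crossing the cold plates

heat_removed_w = flow_kg_per_s * CP_WATER * (return_c - supply_c)
print(f"{heat_removed_w / 1000:.0f} kW per kg/s of flow")  # ~42 kW
```

And because the return water is still cool enough for rooftop radiators to shed its heat to ambient air in most climates, the loop can close without chillers, which is the efficiency Huang and Tease are pointing to.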
## Hot goes mainstream While the technology has been around for years, it’s only lately, with the push for more AI compute, that hot-water cooling is moving into the spotlight—and into the mainstream. “It’s now imperative,” says Arthur Hu, senior vice president, global CIO, and chief delivery and technology officer at Lenovo’s services and solutions group. And the AI factory that Nvidia’s Huang talked about is only possible with hot-water cooling. Hu said that data centers deploying hot-water cooling usually do it in brand-new facilities. It’s possible to do it for just a part of a data center or a rack, he says, but it’s not usually a good fit because of power requirements. “When you’re doing it at scale, the issue has to do with power density,” he says. “With the next generation, we’re starting to get into power densities that are 20 to 30 times higher. So, it doesn’t make sense to just use it in a corner of a traditional facility because they won’t have the power.” According to the Uptime Institute, top-of-the-line processors can now be effectively cooled at 40 degrees Celsius, and more direct liquid cooling providers are stepping up. Data center cooling vendor Accelsius, for example, has a two-phase system that uses a dielectric liquid with a low boiling point to cool servers instead of water. “We perform six to eight degrees better than single-phase water,” says Accelsius CTO Rich Bonner. “You might be able to run at 51 to 55 degrees Celsius, enabling even greater energy efficiencies.” One challenge of running at 45 degrees is that it’s in the microbial growth range, he says. “When you get to 55, 60 degrees Celsius, you’re hot enough that it kills all of that,” he says. Plus, with water, there are corrosion issues, he adds. The way that the two-phase systems work is that the fluid is pumped to the chip, where it starts to boil because of its lower boiling temperature. The bubbling creates turbulence, which increases heat transfer, making the entire system more efficient than water. “We use four to nine times less liquid flow rate to the chip,” says Bonner. The trick is that the vapor isn’t any hotter than the liquid was—the heat goes into the phase change, not into making the liquid hotter. So instead of cooling down the fluid once it leaves the server, it’s condensed back to a liquid state. Several other vendors are also in the game: * In mid-2024, LiquidStack announced the availability of its CDU-1MW coolant distribution unit, which supports temperatures of up to 45 degrees Celsius. * Supermicro announced in early 2025 that its direct liquid cooling solution would now support temperatures of up to 45 degrees Celsius. * Vertiv’s CoolLoop Trim Cooler, announced last March, supports water temperatures up to 40 degrees Celsius and cold plates of 45 degrees Celsius. * Schneider Electric’s Motivair also supports liquid cooling of 45 degrees Celsius, and even higher. ## The benefits of liquid cooling AI factories can draw hundreds of megawatts of power, but 30% of it is lost to power conversion, distribution, and cooling, according to Nvidia. At a temperature of 45 degrees, data centers can cool their water with just ambient air, compared with 35-degree designs, which can require more costly and complex mechanical cooling. According to McKinsey, liquid-cooling systems can have higher upfront costs but save money in the long term. Direct-to-chip cooling systems, for example, use 31% less power, McKinsey says, than traditional air cooling. 
Break-even points for liquid cooling systems are between one and three years, depending on local electricity costs. Liquid cooling systems are also quieter, take up less physical space, and allow for higher server densities in data centers. Higher temperatures also allow for closed-loop systems. Instead of cooling down water by letting it evaporate, the water can flow through a set of radiator pipes on the roof of the data center, dissipating enough heat into the air for the water to be reused—even in warm climates. This significantly reduces a data center’s water consumption. According to research released by Dell’Oro Group, the liquid cooling market nearly doubled in 2025, reaching close to $3 billion in revenue, and will grow to nearly $7 billion by 2029. Air cooling remains predominant, but liquid cooling is gaining ground, according to a survey conducted by S&P Global Market Intelligence’s 451 Research. Today, 45% of data centers are cooled only by air, down from 48% in 2024. Meanwhile, 42% of respondents say they’re using a combination of air and liquid cooling, and 12% are fully liquid cooled. In addition, 59% of respondents say they plan to implement liquid cooling in the next five years, with 21% intending to do so in the next 12 months. The majority of respondents said that the benefit of liquid cooling is that it allows for increased server power and higher rack density. Other benefits cited include better power usage, better total cost of ownership, and quieter operations. But there are downsides as well. According to the S&P survey, there are reasons not to rush to upgrade. For example, the high cost of installation and maintenance was cited by 56% of respondents as a barrier to adoption. In addition, 53% say that air cooling is still adequate for most cooling needs. There’s also a lack of standardization for connecting components, say 29%, and 29% also cited a shortage of skilled personnel. ## Hot liquid’s downsides So liquid cooling in general isn’t a magic bullet for data centers. And that is even more true for liquids at 45 degrees Celsius and above. “There are no downsides in terms of ultimate performance and sustainability,” says Accelsius’ Bonner. “But there are challenges.” For example, if all the water in a facility is at 45 degrees Celsius (113 Fahrenheit), then the whole data center is going to be at that temperature. “That can be very hot for operators,” he says. “You might need to cool the air just for the people to work in it.” Surfaces can be hot to the touch, Bonner says. And just because the latest chips can run at high temperatures and be liquid cooled, that doesn’t mean the same holds true for all the other equipment in the data center. Finally, all those radiators on the roof can take up space. A lot of space, he says. As a result, Bonner says, most hyperscalers and other large operators still have chillers in their data centers and are running their water at between 30 and 35 degrees Celsius. After Huang’s CES keynote address, shares of several cooling technology companies immediately fell, according to Morningstar, impacting Johnson Controls, Modine Manufacturing, Trane Technologies, and Carrier Global. But they recovered quickly. “It’s great that Nvidia is putting this out there, and I suspect that there will be takers,” Bonner says. “There is a trend where the temperature is going upward over time. 
Every generation, there’s a couple of degrees increase.” Today, only a minority of data center workloads use high water temperatures in their water-cooled racks, says Dell’Oro’s Cordovil, and that’s mostly for AI workloads. “But we expect them to be the majority for liquid-cooled racks from 2027,” he adds.
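Bonner’s flow-rate claim from earlier in this piece is easy to sanity-check. Single-phase water carries away only sensible heat (specific heat times temperature rise), while a two-phase fluid absorbs its latent heat of vaporization as it boils. A quick Python sketch with textbook-style values, which are our assumptions rather than Accelsius specifications:

```python
# How much coolant must flow to carry away 1 kW of chip heat?
CP_WATER = 4.18           # kJ/(kg·K), specific heat of liquid water
DELTA_T = 10.0            # K, assumed single-phase temperature rise
H_FG_DIELECTRIC = 180.0   # kJ/kg, assumed latent heat of a two-phase fluid

heat_kw = 1.0
water_flow = heat_kw / (CP_WATER * DELTA_T)   # kg/s via sensible heating
boiling_flow = heat_kw / H_FG_DIELECTRIC      # kg/s via phase change

print(f"single-phase water: {water_flow * 1000:.1f} g/s per kW")
print(f"two-phase fluid:    {boiling_flow * 1000:.1f} g/s per kW")
print(f"ratio:              {water_flow / boiling_flow:.1f}x less flow")
```

With these numbers, the two-phase fluid needs roughly 4x less mass flow per kilowatt, the low end of Bonner’s four-to-nine-times range; a fluid with higher latent heat, or a tighter water temperature rise, pushes the ratio higher.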
23.02.2026 10:00 — 👍 0    🔁 0    💬 0    📌 0
Preview
Cisco and AT&T partner for 5G IoT services Cisco and AT&T have expanded their partnership to offer IoT and private network services via a dedicated 5G backbone that supports ultra-low latency, high speeds, security, and simplified management. Specifically, Cisco will integrate its Mobility Services Platform, IoT Control Center, and Converged Core products with AT&T’s 5G Standalone service. The AT&T service, like other standalone offerings from Verizon or T-Mobile, is what the carrier calls a true 5G core, meaning it doesn’t rely on older 4G LTE infrastructure to deliver 5G services. It includes features such as the ability to support dedicated bandwidth to specific application workloads and to identify traffic types to let businesses apply policies more easily. Cisco’s Mobility Services Platform includes a full-stack, cloud-native converged core network and distributed edge support. It’s designed to simplify how service providers and businesses build, manage, and deliver new mobile services globally, Cisco says. The integrated AT&T/Cisco service lets customers buy dedicated virtual slices to guarantee bandwidth and latency, for example. Customers can use this feature to separate traffic for critical IoT workloads from other traffic to ensure delivery, Cisco says. In addition, the service will enable enterprises to support IoT operations and applications with optimized local performance as well as to more efficiently support IoT lifecycle management, diagnostics, and automation, according to Cisco. Other carriers—including Verizon, T-Mobile, and Vodafone—also offer 5G standalone services for enterprise applications such as IoT, edge computing, and industrial automation. Cisco and AT&T have partnered in the past to provide customers with SASE services, 5G Fixed Wireless Access (FWA) gateways, and other business connectivity products.
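As a rough illustration of the slicing idea, here is a minimal Python sketch of a policy table that steers detected traffic types onto slices with different guarantees. All names and numbers are hypothetical; this is not the AT&T/Cisco API.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    min_mbps: float        # guaranteed bandwidth for the slice
    max_latency_ms: float  # latency bound the slice is engineered for

# Hypothetical slices an enterprise might purchase.
SLICES = {
    "critical-iot": Slice("critical-iot", min_mbps=50, max_latency_ms=10),
    "best-effort":  Slice("best-effort", min_mbps=0, max_latency_ms=100),
}

# Hypothetical policy: identified traffic types mapped to slices.
POLICY = {
    "plc-telemetry": "critical-iot",
    "firmware-update": "best-effort",
}

def slice_for(traffic_type: str) -> Slice:
    """Return the slice a traffic type rides on (default: best effort)."""
    return SLICES[POLICY.get(traffic_type, "best-effort")]

print(slice_for("plc-telemetry").name)  # -> critical-iot
print(slice_for("bulk-backup").name)    # -> best-effort
```

The point of the sketch is the separation: critical IoT flows get a slice with guaranteed bandwidth and a latency bound, while anything unrecognized defaults to best effort.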
20.02.2026 18:28 — 👍 0    🔁 0    💬 0    📌 0
Preview
Meta scoops up more of Nvidia’s AI chip output AI’s insatiable demand for chips has already had an effect on the IT market, and it could be about to get worse: Nvidia has entered into a multi-year strategic partnership with Meta to fill the social network’s new AI data centers with its cutting-edge processors. Nvidia will supply Meta with millions of its Blackwell and Rubin GPUs, and Meta will integrate Nvidia Spectrum-X Ethernet switches into its Facebook Open Switching System platform. Meta will also expand its use of Nvidia’s Grace chips in what Nvidia described as “the first large-scale Grace-only deployment.” Grace CPUs are usually paired with Nvidia’s Blackwell GPUs. “No one deploys AI at Meta’s scale,” Nvidia CEO Jensen Huang said in a news release. Meta plans capital expenditure, mostly on data centers and the computing infrastructure they contain, of $115 billion-$135 billion this year — more than some hyperscalers, which rent their computing capacity to others. Meta is keeping it all for itself. This could be bad news for other enterprises, as the demands of the hyperscalers and big customers like Meta are leading to a decrease in the availability of chips to train and run AI models. IDC is predicting that the broader AI-driven chip shortage will have a significant effect on the IT market over the next two years as companies struggle to replace everything from laptops to servers. In particular, businesses looking for Nvidia processors may be forced to look elsewhere. #### **More Nvidia news:** * Reports of Nvidia/OpenAI deal in jeopardy are overblown, says Nvidia’s CEO * Eying AI factories, Nvidia buys bigger stake in CoreWeave * China clears Nvidia H200 sales to tech giants, reshaping AI data center plans * Nvidia is still working with suppliers on RAM chips for Rubin * RISC-V chip designer SiFive integrates Nvidia NVLink Fusion to power AI data centers * Nvidia H200 chips in China: US says yes, China says no * Lenovo-Nvidia partnership targets faster AI infrastructure rollouts * Top 10 Nvidia stories of 2025 – From the data center to the AI factory * HPE loads up AI networking portfolio, strengthens Nvidia, AMD partnerships * Nvidia’s $2B Synopsys stake tests independence of open AI interconnect standard * Nvidia bets on open infrastructure for the agentic AI era with Nemotron 3 * Nvidia moves deeper into AI infrastructure with SchedMD acquisition * Nvidia chips sold out? Cut back on AI plans, or look elsewhere
20.02.2026 17:00 — 👍 0    🔁 0    💬 0    📌 0
Preview
Arrcus targets AI inference bottleneck with policy-aware network fabric As AI usage continues to scale, there is a distinct type of application traffic that is having an impact on networking. Training isn’t the issue; inference is. Training runs in centralized clusters on predictable schedules. Inference is distributed, latency-sensitive, and subject to real-time constraints around power availability, data sovereignty, and cost. The network fabric that is routing that traffic is increasingly the bottleneck, and traditional hardware-defined networking was not built to handle it. That is the problem Arrcus is moving to address. The San Jose-based networking software company has spent a decade building ArcOS, a network operating system designed to decouple routing and switching workloads from proprietary hardware. The company sells into data center, telco, and enterprise markets, running in production across thousands of network nodes globally. This week, Arrcus reported threefold bookings growth in 2025 and announced the Arrcus Inference Network Fabric (AINF), a product built to dynamically steer AI inference traffic across distributed infrastructure. “To enhance agentic AI adoption by improving response times, networks need to become AI-aware,” Shekar Ayyar, chairman and CEO of Arrcus, told _Network World_. ## How ArcOS differs from SONiC and NSX Understanding what Arrcus is doing with AINF requires understanding what ArcOS actually is, and where it sits relative to other networking technologies like SONiC or VMware’s NSX. SONiC is a switching-focused operating environment suited to operators that want to scale out data center capacity with straightforward packet forwarding. NSX operates at the virtualization layer as a network overlay for compute environments. ArcOS works at Layer 3 and is designed for policy-rich routing use cases: 5G network slicing for carriers, data center interconnects, and environments where programmable traffic steering matters. SoftBank’s deployment of Arrcus for the SRv6 mobile user plane is one publicly disclosed example. “Switching is essentially a simpler operation. You just kind of send a packet or not,” Ayyar explained. “Routing is a more complex operation. You tell the packet where to go and what to do. You have a lot more richness and policy in what you do on the routing front.” That policy-rich routing foundation is what Arrcus is now applying to AI inference. ## The inference problem and how AINF addresses it As AI workloads shift from centralized training to distributed inference, the network faces a different class of demands. Inference nodes are geographically dispersed and must satisfy simultaneous constraints around latency, throughput, power capacity, data residency, and cost. Those constraints vary by location and change in real time, and traditional hardware-defined networking was not designed to handle them dynamically. “These inference nodes are now going to become super critical in understanding exactly what the constraints are at those inference points,” Ayyar said. “Do you have a power constraint? Do you have a latency constraint? Do you have a throughput constraint? And if you do, how are you going to direct and steer your traffic?” AINF addresses this by introducing a policy abstraction layer that sits between Kubernetes-based orchestration and the underlying silicon. Models expose their requirements via an API, disclosing the parameters they need. Those requirements flow down to the routing layer, which steers traffic accordingly. 
“Think about us as speeding up the process of how all of those requirements find their way to the router, and then instructing the routing node at the appropriate location in this giant web of networking nodes to do the right thing so that it satisfies the inference policy,” Ayyar said. Operators define business policies including latency targets, data sovereignty boundaries, model preferences, and power constraints. AINF evaluates those conditions in real time and steers inference traffic to the optimal node or cache. Components include query-based inference routing with policy management, interconnect routers, and edge networking. The system integrates with vLLM, SGLang, and Triton inference frameworks. Prefix awareness is used to optimize KV cache usage and help inferencing applications meet service-level objectives for throughput, latency, data sovereignty, power, and cost. ## Challenges and outlook Ayyar identified two near-term obstacles to adoption. The first is awareness. He noted that many potential customers have been designing inference architectures without accounting for policy-aware fabrics as an option. The second is incumbent lock-in, with Cisco and Juniper shops needing assurance that AINF can interoperate cleanly alongside existing infrastructure. Ayyar said Arrcus has invested heavily in interoperability testing to address this. Arrcus is projecting to cross $100 million in bookings in 2026, a target set before any contribution from AINF. The company plans to demonstrate the product at MWC Barcelona and Nvidia GTC in San Jose. “All the talk we’re seeing about AI and the infrastructure related to AI is mostly the tip of the iceberg,” Ayyar said. “What people are not appreciating yet is what is underneath the water, where we believe the efficiency gains as well as the effectiveness gains are hidden and lurking underneath. As soon as that comes to light, it’s almost like throwing X-ray vision on top of this and saying, look, this is where the world is headed. Begin now.”
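To illustrate the shape of that policy evaluation, here is a hypothetical Python sketch; none of the names below come from Arrcus’ actual AINF APIs. Each inference node advertises its live constraints, and a policy filters and ranks the candidates.

```python
from dataclasses import dataclass

@dataclass
class InferenceNode:
    name: str
    region: str                # for data-sovereignty checks
    latency_ms: float          # measured round-trip to this node
    power_headroom_kw: float   # spare power capacity at the site
    cost_per_1k_tokens: float

@dataclass
class Policy:
    max_latency_ms: float
    allowed_regions: set       # sovereignty boundary
    min_power_headroom_kw: float

def steer(nodes, policy):
    """Pick the cheapest node that satisfies every constraint, or None."""
    candidates = [
        n for n in nodes
        if n.latency_ms <= policy.max_latency_ms
        and n.region in policy.allowed_regions
        and n.power_headroom_kw >= policy.min_power_headroom_kw
    ]
    return min(candidates, key=lambda n: n.cost_per_1k_tokens, default=None)

nodes = [
    InferenceNode("edge-fra-1", "eu", 8.0, 12.0, 0.40),
    InferenceNode("dc-iad-2", "us", 95.0, 50.0, 0.22),
    InferenceNode("edge-ams-3", "eu", 11.0, 2.0, 0.31),
]
policy = Policy(max_latency_ms=20, allowed_regions={"eu"},
                min_power_headroom_kw=5)
print(steer(nodes, policy).name)  # -> edge-fra-1, despite a cheaper US node
```

The cheapest node overall sits in the wrong region, and the cheapest in-region node lacks power headroom, so the policy lands on the one node that satisfies everything. That is the kind of tradeoff AINF is built to make continuously and in real time.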
20.02.2026 14:41 — 👍 0    🔁 0    💬 0    📌 0
Preview
Western Digital wants to ramp up hard disk drive speeds At its recent Innovation Day 2026 event, Western Digital previewed two technologies in development to increase HDD throughput: high bandwidth drive technology (HBDT) and dual pivot technology (DPT). HBDT enables simultaneous reading and writing from multiple heads on multiple tracks, delivering up to twice the bandwidth of conventional HDDs without power penalties. DPT adds a second set of independently operating actuators on a separate pivot and will deliver up to a twofold sequential I/O gain within a 3.5-inch drive. There have been dual actuator designs in the past, but they sacrificed capacity and required extensive customer software changes. DPT enables reduced spacing between disks, allowing for more platters per drive and higher overall capacity. Western Digital’s ultimate aim is to combine HBDT and DPT into a single drive, which could deliver up to four times the bandwidth of current HDDs (around 1.2GB/s) without using more power than a standard HDD. “Combined in the same drive, two-track HBDT plus dual pivot is projected to increase throughput from today’s 300MB/s to approximately 1.2GB/s, a 4x increase, while preserving HDD economics. This restores throughput-per-terabyte parity as capacities scale, helping ensure future 100TB HDDs behave like today’s 26TB drives from an access perspective,” wrote Reed Martin, a senior project manager with Western Digital, in a blog post about the news. “This would give the theoretical 100TB HDDs of the future a throughput/TB equivalent to today’s 26TB HDDs.” The caveat is the interface: a SATA-based drive, whether it is a hard disk or an SSD, maxes out at about 550MB/s of throughput. That port is the great equalizer between hard disk and flash drives, and it is ancient in technological terms. Most enterprises are not using SATA drives, at least not for hot data; perhaps for cold storage, but not for frequently accessed data. They are using PCI Express-based drives, and those are considerably faster than anything Western Digital can engineer in a hard disk. Speed aside, Western Digital is also aiming for much higher capacity. WD is working on a 100TB hard drive for the enterprise market based on HAMR (heat-assisted magnetic recording) technology, with drives expected to ship by 2029. The company also has ePMR (energy-assisted perpendicular magnetic recording) technology; it expects to ship 40TB ePMR drives this year and to reach 60TB within a few years. Western Digital also announced the expansion of its Platforms business to extend hyperscale storage economics to a broader set of customers. This expansion includes the development of an intelligent software layer, exposed through an open API and expected to launch in 2027, that will enable companies at 200+ petabyte scale to achieve the same storage efficiency and economics that hyperscalers enjoy today. “For the past year, WD has remained continuously focused on execution and accelerating innovation, which has enabled us to truly reimagine the hard drive to meet the requirements of AI,” said Irving Tan, CEO of Western Digital, in a statement. “Today, we are showcasing innovation that reflects our deep connection to our customers and how we are meeting demand for capacity, scale, quality, enhanced performance, and ease of technology adoption.”
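The throughput-per-terabyte parity claim checks out arithmetically. A quick Python sketch combining Western Digital’s own stated figures (the combination is ours):

```python
# Throughput per terabyte, using Western Digital's stated figures.
def mb_s_per_tb(throughput_mb_s: float, capacity_tb: float) -> float:
    return throughput_mb_s / capacity_tb

today = mb_s_per_tb(300, 26)      # today's 26TB drive at 300MB/s
future = mb_s_per_tb(1200, 100)   # projected 100TB drive at 1.2GB/s

print(f"26TB @ 300MB/s:  {today:.1f} MB/s per TB")
print(f"100TB @ 1.2GB/s: {future:.1f} MB/s per TB")
```

Both work out to roughly 11-12 MB/s per terabyte, which is the parity Martin describes. It also makes plain that a 1.2GB/s drive would need a host interface faster than SATA’s roughly 550MB/s ceiling to deliver that bandwidth.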
19.02.2026 18:30 — 👍 0    🔁 0    💬 0    📌 0
Preview
LoRaWAN reaches 125 million devices as industrial IoT expands Connecting battery-powered sensors across large areas has been a persistent challenge for enterprise and industrial IoT deployments. Wi-Fi lacks the range and consumes too much power for long-life sensor applications. Cellular covers the distance but introduces licensing costs and coverage gaps in dense indoor environments. Bluetooth is limited to short range. None of those technologies were designed with massive-scale, low-power IoT as the primary use case. LoRaWAN was. It is an open standard for low-power wide-area networks (LPWANs), built specifically for battery-powered sensors that need to communicate over long distances using unlicensed spectrum. It is managed by the LoRa Alliance, a nonprofit with 360 member organizations, and is ratified by the International Telecommunication Union (ITU). The LoRa Alliance released its 2025 End of Year Report this month, showing the technology has reached 125 million globally deployed devices, growing at a 25% compound annual growth rate. The alliance has certified more than 625 end devices and counts Verizon, AWS, and Comcast among its members. Multi-million-device networks are in production with members including Zenner, Actility, Netmore, The Things Industries, and Veolia. Key findings from the report include LoRaWAN taking the lead as the top wireless technology for smart building and facility management, utilities remaining the largest deployment vertical led by smart water, and non-terrestrial network integration advancing with LEO satellite operators. The 2025 specification also added two new data rates to support growing indoor deployment density. The alliance’s ecosystem expanded by 57 new members in 2025 alone. “We are building the fourth pillar of the wireless communication industry,” Alper Yegin, CEO of the LoRa Alliance, told _Network World_. “The other three pillars being Wi-Fi, Bluetooth, cellular. And LoRaWAN is the fourth one, and all these four technologies are very, very complementary.” ## How LoRaWAN works LoRa stands for Long Range. The underlying LoRa radio technology was developed by a French startup called Cycleo, which Semtech acquired in 2012. The LoRaWAN protocol and the LoRa Alliance were both established in 2015. LoRaWAN is built on LoRa, a chirp spread-spectrum physical layer developed by Semtech. The standard supports data rates ranging from approximately 300 bits per second at maximum range to around 5 kilobits per second at shorter distances. Those low data rates are intentional. They directly enable multi-year battery life on end devices and reduce infrastructure costs. In line-of-sight conditions, LoRa signals can reach several hundred miles. Indoors, the signal penetrates multiple floors and walls. A basic indoor gateway costs around $100 and can cover several floors vertically and a city block horizontally. The silicon supply chain is narrow. Yegin identified two primary chip suppliers: Semtech and STMicroelectronics. LoRaWAN was designed from scratch for IoT rather than adapted from an existing wireless standard. Yegin contrasted it with cellular IoT alternatives. “Anything that comes out of 3GPP is only a retrofit on their legacy systems, which will always fall short in terms of price, efficiency and reliability,” he said. ## New data rates for indoor deployments The 2025 specification added two new higher data rates. As indoor deployments grow, the average distance between a device and a gateway shrinks. 
LoRaWAN’s adaptive data rate mechanism increases throughput as that distance decreases. The new data rates reduce congestion in dense indoor deployments and improve power efficiency for devices operating in close proximity to gateways. Additional data rates are under consideration but not on a defined roadmap. The LoRaWAN market has also expanded over the past year to help support edge computing. Several LoRaWAN device makers have integrated machine learning inference directly on end devices, reducing the amount of data that needs to be transmitted over the network. Yegin noted that Honeywell’s vibration sensors train on the normal operating pattern of rotating machinery. When the sensor detects sufficient deviation from that pattern, it transmits an event notification rather than raw vibration data. Camera-based sensors from vendors including I See Studio process video on-device to count occupants or detect fire, then report only the result over LoRaWAN. “This is even deeper than what people call the edge,” Yegin said. “For them, the edge is like the base station. We take AI all the way down to the device itself.” ## Smart buildings adoption The alliance’s report identifies smart buildings and facility management as the vertical where LoRaWAN now leads among wireless technologies. Yegin said the growth was not the result of the alliance targeting the segment. “The smart buildings market discovered LoRaWAN,” he said. “It happened without us pushing it, really. We haven’t pushed for that, and they just picked it up, and they’re running so fast we’re having a hard time keeping up with it.” AT&T’s deployment is one example. After ending its own IoT product, AT&T launched a LoRaWAN-based facility management product called Connected Spaces without coordinating with or joining the LoRa Alliance. ## Satellite integration is set to grow Terrestrial LoRaWAN networks cannot achieve complete geographic coverage. Yegin cited Swisscom’s nationwide Switzerland deployment, which covers 97.2% of the population but cannot reach remote alpine terrain. Two LoRa Alliance members, Lacuna Space and Plan-S, already operate commercial LoRaWAN services from low Earth orbit. Standard LoRaWAN end devices communicate with those satellites without modification. Target use cases include remote terrain monitoring, linear infrastructure such as oil pipelines and rail lines, open ocean tracking, and border security. European regulators approved satellite-to-low-power device communications in 2025. Additional non-terrestrial network announcements from Alliance members are expected to be revealed at the upcoming Mobile World Congress (MWC) 2026 event in March. ## Adoption challenges and what’s next Despite 10 years of development and 125 million deployed devices, Yegin said awareness remains the primary adoption barrier. He also pointed to a structural challenge that has constrained IoT broadly. A deployable solution requires every link in the chain to work: sensor, network, application, support and resellers. One weak link renders the solution unusable. “The moment people understand what their own problems are and then understand what LoRaWAN can offer, that’s when things start accelerating pretty fast,” he said. On near-term priorities, smart home is the least developed vertical in the alliance’s current portfolio, with announcements in the pipeline. Satellite scale-up is the other focus. Yegin noted that these markets are technically interrelated. 
When a smart home deploys LoRaWAN, it immediately serves the utility market because meters in that household connect to the same network. Smart city networks in turn complement smart home networks for asset tracking use cases. The longer-term vision is LoRaWAN as a ubiquitous, plug-and-play utility layer. “You just buy a device, you just remove that plastic strip, it just connects,” Yegin said. “You don’t know where it connects. It connects to the network, which is backed by a multitude of networks collaborating, like your home, your neighbor’s home, the utility network, the city, backed by the satellites. That’s the vision.”
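To make the range-versus-data-rate tradeoff discussed above concrete, here is a Python sketch of the published LoRa time-on-air formula (Semtech’s application note AN1200.13). The parameter choices, a 125 kHz channel, an 8-symbol preamble, explicit header, and CRC enabled, are our assumptions for illustration.

```python
from math import ceil

def time_on_air_ms(payload_bytes: int, sf: int, bw_hz: int = 125_000,
                   cr: int = 1, preamble: int = 8) -> float:
    """Airtime of one LoRa packet, per Semtech AN1200.13 (explicit header,
    CRC enabled). cr=1 corresponds to coding rate 4/5."""
    t_sym = (2 ** sf) / bw_hz * 1000                 # symbol duration in ms
    de = 1 if sf >= 11 and bw_hz == 125_000 else 0   # low-data-rate optimization
    n_payload = 8 + max(
        ceil((8 * payload_bytes - 4 * sf + 28 + 16) / (4 * (sf - 2 * de)))
        * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

for sf in (7, 10, 12):  # SF7 = close to a gateway, SF12 = maximum range
    print(f"SF{sf}: 20-byte uplink takes about {time_on_air_ms(20, sf):,.0f} ms")
```

A 20-byte uplink takes about 57 ms at spreading factor 7 but roughly 1.3 seconds at spreading factor 12, which is why higher data rates for devices sitting close to indoor gateways both cut congestion and extend battery life.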
19.02.2026 16:06 — 👍 0    🔁 0    💬 0    📌 0