http://x.com/i/article/2022760628889489414
14.02.2026 20:03 · @openaibot.bsky.social
A bot that crossposts OpenAI official accounts' tweets. Unofficial bot.
14.02.2026 20:03 · Quote Tweet: https://x.com/FactoryAI/status/2022399654198153256
14.02.2026 19:47 · Download the Codex app and start your own project:
https://openai.com/codex/
❤️
https://valentines-day-with-codex.vercel.app/?refresh=1
Better than flowers
Thank you, Codex.
"If you're a builder, what a time to be alive."
@steipete breaks down how his workflow changed when you can just build things, covering how he prompts, iterates, and ships with Codex.
Full episode drops 2/23.
Quote Tweet: https://twitter.com/i/status/2022390096625078389
research-grade intelligence at your fingertips
13.02.2026 20:29 · Some of these generalizations are also amenable to solution via this new AI methodology and will be reported on elsewhere. (3/3)
13.02.2026 19:19 · Showing that a case long thought to be empty actually contains structure sharpens our understanding of the mathematics of the strong force and opens new directions, including extensions to gravity and related amplitude relations. (2/3)
13.02.2026 19:19 · Thank you to our collaborators for their partnership. The preprint is available on arXiv and is being submitted for publication.
We welcome feedback from the community. https://arxiv.org/abs/2602.12176
For decades, one specific gluon interaction ("single-minus" at tree level) was widely treated as having zero amplitude, meaning it was assumed not to occur. (1/2)
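For context (textbook background, not a claim from the preprint): in pure Yang-Mills theory, the color-ordered tree-level gluon amplitudes with all-plus or exactly one negative helicity are conventionally taken to vanish, and the first nonvanishing configurations are the MHV amplitudes:

\[
A_n^{\text{tree}}(1^+,2^+,\ldots,n^+) = 0, \qquad
A_n^{\text{tree}}(1^-,2^+,\ldots,n^+) = 0, \qquad
A_n^{\text{tree}}(\ldots,i^-,\ldots,j^-,\ldots) \;\propto\; \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle}.
\]

The "single-minus" case referred to above is the second relation; the specific conditions under which the preprint finds it can be nonzero are stated only in the paper itself.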
13.02.2026 19:19 · GPT-5.2 derived a new result in theoretical physics.
We're releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. (1/2)
Quote Tweet: https://x.com/dkundel/status/2022094277401358554?s=20
13.02.2026 01:51 · Quote Tweet: https://x.com/PaulSolt/status/2022094274788307201?s=20
13.02.2026 01:51 · Quote Tweet: https://x.com/instant_db/status/2022060886840774859?s=20
13.02.2026 01:51 · Quote Tweet: https://x.com/danshipper/status/2022009455773200569?s=20
13.02.2026 01:51 · Quote Tweet: https://x.com/warpdotdev/status/2022061861659967920?s=46
12.02.2026 22:46 · At @SierraPlatform, Codex helps the team jump straight into building projects with minimal setup.
Codex enables them to explore ideas faster and stay focused on solving problems.
on Wednesday, February 11th, more than 4 billion messages were sent to and answered by ChatGPT. this represents more than 160 billion words spoken to a machine intelligence in one day.
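A quick back-of-the-envelope check of those two figures (both numbers come from the post above; the per-message average is derived here, not stated):

```python
# Sanity check of the figures quoted above; the ~40 words/message average
# is derived from them, not stated in the post.
messages_per_day = 4_000_000_000        # "more than 4 billion messages"
words_per_day = 160_000_000_000         # "more than 160 billion words"

avg_words_per_message = words_per_day / messages_per_day
print(f"average words per message: {avg_words_per_message:.0f}")  # -> 40
```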
12.02.2026 20:10 · We're giving a small group of API customers early access to Codex-Spark to experiment with it in their products, helping us continue optimizing performance beyond Codex.
We'll expand access to more ChatGPT users and API developers as we add more capacity.
Codex-Spark is currently text-only with a 128k context window.
We'll introduce more capabilities, including larger models, longer context lengths, and multimodal input, as we learn from our first production deployment of low-latency infrastructure and hardware.
These improvements will roll out across all models in Codex over the next few weeks. (2/2)
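Since Codex-Spark is described above as text-only with a 128k context window, a client might pre-check prompt size before sending. A minimal sketch; the choice of tiktoken's o200k_base encoding is an assumption, since the posts do not name a tokenizer:

```python
import tiktoken  # pip install tiktoken

CONTEXT_WINDOW = 128_000  # stated limit for Codex-Spark (text-only)

# Assumption: o200k_base is used only as a rough proxy for token counts;
# the posts do not say which tokenizer Codex-Spark uses.
encoding = tiktoken.get_encoding("o200k_base")

def fits_context(prompt: str, reserved_for_output: int = 8_000) -> bool:
    """True if the prompt leaves `reserved_for_output` tokens of headroom
    for the model's reply within the 128k window."""
    return len(encoding.encode(prompt)) + reserved_for_output <= CONTEXT_WINDOW

print(fits_context("Refactor this module to remove the global cache."))  # True
```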
12.02.2026 18:08 · Codex will continue to get faster.
We've optimized infrastructure on the critical path of the agent by improving response streaming, accelerating session initialization, and rewriting key parts of our inference stack. (1/2)
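From the client's side, most of that latency work shows up through streamed responses. A minimal sketch using the OpenAI Python SDK; the model id below is an assumption, since the posts do not give an API identifier for Codex-Spark:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumption: "gpt-5.3-codex-spark" is an illustrative model id only;
# the posts do not state the actual API name.
stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",
    messages=[{"role": "user", "content": "Write a git alias that amends the last commit."}],
    stream=True,  # tokens arrive incrementally, so time-to-first-token dominates perceived latency
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry role/finish metadata instead of text
        print(delta, end="", flush=True)
print()
```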
GPT-5.3-Codex-Spark is the first milestone in our partnership with @cerebras.
It provides a faster tier on the same production stack as our other models, complementing GPUs for workloads where low latency is critical.
https://openai.com/index/introducing-gpt-5-3-codex-spark/
Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose-built for real-time coding.
We're rolling it out as a research preview for ChatGPT Pro users in the Codex app, Codex CLI, and IDE extension.
Rolling out today to ChatGPT Pro users in the Codex app, CLI, and IDE extension.
https://openai.com/index/introducing-gpt-5-3-codex-spark/
Tomorrow at 10am PT, legacy models (GPT-5, GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini) will be deprecated in ChatGPT.
https://openai.com/index/retiring-gpt-4o-and-older-models/
Quote Tweet: https://twitter.com/i/status/2021704637850472502
Codex is rolling out company-wide at NVIDIA to ~30k engineers.
We partnered closely with their team to deliver cloud-managed admin controls and US-only processing with fail-safes.
Engineers at @harvey use Codex to explore multiple approaches in parallel and converge faster on a solution.
Codex frees up their time for deeper system design and complex decision-making.
Quote Tweet: https://twitter.com/i/status/2021286050623373500
Here's a refresher:
11.02.2026 23:17