OpenAI's latest model, o3, has achieved unprecedented performance on benchmarks like ARC-AGI, sparking debates about the dawn of artificial general intelligence (AGI).
17.01.2025 16:55

@pocketai.bsky.social
a billion parameters in your pocket
I'm excited to introduce Pocket: the app that brings powerful AI to your iPhone, entirely offline. With Pocket, you can run advanced AI models on your device, keeping your data private and secure.
apps.apple.com/de/app/pocke...
That's very cool! But Ollama can't be compared with a native framework like MLX, which uses GPU acceleration, so comparing their performance would be meaningless.

12.01.2025 20:10
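A rough way to ground a comparison like this is to measure tokens per second for each backend on the same prompt. Below is a minimal sketch using the official ollama Python client; it assumes a local Ollama server is running and that a llama3.2 model has already been pulled, and the prompt is just a placeholder:

```python
import ollama

# Ask the local Ollama server (http://localhost:11434 by default) for a
# completion; the response carries timing metadata alongside the text.
response = ollama.generate(
    model="llama3.2",  # assumes `ollama pull llama3.2` was run beforehand
    prompt="Explain GPU acceleration in one paragraph.",
)

# Ollama reports eval_count (tokens generated) and eval_duration
# (nanoseconds spent generating); together they give tokens per second.
tokens = response["eval_count"]
seconds = response["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.2f} s -> {tokens / seconds:.1f} tok/s")
```

Pointing the same prompt at another backend and comparing the two numbers at least keeps the measurement apples-to-apples, even if the frameworks differ underneath.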
Look, no internet at all!

04.01.2025 08:27

Running AI locally isn't for everyone. It requires:
Hardware Resources: High-end GPUs or specialized accelerators may be needed for acceptable performance.
Setup Time: Initial setup and optimization can be time-consuming (see the sketch after this list).
Maintenance: Ongoing updates and troubleshooting are your responsibility.
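That said, the one-time setup for a small model can be short. A minimal sketch of setup plus a first fully offline call, again with the ollama Python client; the model tag and message are placeholder assumptions:

```python
import ollama

# One-time setup: download the model weights to the local machine
# (assumes the Ollama server is installed and currently running).
ollama.pull("llama3.2")

# From here on, inference needs no internet connection at all.
reply = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello from an offline machine!"}],
)
print(reply["message"]["content"])
```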
8. Experimentation and Learning
Hands-On Experience: Hosting AI locally is a great way to learn more about machine learning and neural networks.
Control Over Updates: You can experiment with new architectures or models without waiting for external providers to update their offerings.
7. Independence from Providers
No Vendor Lock-in: By running AI locally, you avoid becoming dependent on a specific provider's ecosystem, which could change its pricing, policies, or availability over time. Local AI bypasses this problem.
6. Transparency
Understandable Behavior: With local AI, you can inspect and modify the model's architecture or weights, giving you insight into its inner workings (see the sketch below).
Open Source Benefits: Many local models are open-source, allowing a deeper understanding of their design and operation.
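As a small illustration of that inspectability, you can enumerate every tensor in a downloaded checkpoint straight from the weight file. A sketch assuming a safetensors checkpoint is already on disk; the path is a hypothetical example:

```python
from safetensors import safe_open

# Lazily open a local checkpoint and list its tensors without loading
# the whole model into memory.
path = "Llama-3.2-3B/model.safetensors"  # hypothetical local path
with safe_open(path, framework="pt") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())
```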
5. Latency
Reduced Response Times: Running a model locally removes the round-trip delay of sending requests to a remote server and waiting for a response (measured in the sketch below).
Real-Time Applications: This is especially valuable for applications that require real-time processing, such as voice assistants or robotics.
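For interactive use, the latency that matters most is usually time to first token. A minimal way to measure it against a local server, once more with the ollama Python client and a placeholder model tag:

```python
import time
import ollama

start = time.perf_counter()
first_token_at = None

# Stream the response so the first generated chunk can be timestamped.
for chunk in ollama.generate(model="llama3.2", prompt="Hi!", stream=True):
    if first_token_at is None:
        first_token_at = time.perf_counter()

print(f"time to first token: {first_token_at - start:.3f} s")
```

The same harness pointed at a hosted API would add network round-trips and queueing on top, which is exactly the overhead local inference avoids.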
4. Offline Access
No Internet Dependency: A locally hosted AI can function without an internet connection, making it useful in remote locations or during outages.
3. Customizability
Fine-Tuning: Local models can often be fine-tuned or adjusted to meet specific needs, whereas hosted models are usually static and generalized.
Integration: You have full control over integrating the model into workflows, software, or hardware, as in the sketch below.
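In practice, integration can be as simple as wrapping the local model in an ordinary function the rest of your code calls. A sketch built around a hypothetical summarize helper on top of the ollama client; the model tag and prompts are assumptions:

```python
import ollama

def summarize(text: str, model: str = "llama3.2") -> str:
    """Summarize text with a locally hosted model; no data leaves the machine."""
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response["message"]["content"]

# Usable like any other function in a script or pipeline:
print(summarize("Local inference keeps data on-device and avoids per-token API fees."))
```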
2. Cost Savings
No Subscription Fees: Once you've set up a local model, there are no recurring fees; this can be cheaper in the long run than subscription-based services (see the break-even sketch below).
Reduced Cloud Costs: For developers or businesses with high usage, local inference eliminates ongoing API or cloud costs.
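Whether "cheaper in the long run" holds depends on usage, so the break-even arithmetic is worth doing. A sketch with purely illustrative numbers; every figure below is a hypothetical assumption, not a quoted price:

```python
# All figures are hypothetical assumptions for illustration only.
subscription_per_month = 20.00  # assumed hosted-AI subscription, USD
hardware_cost = 600.00          # assumed one-time cost of capable hardware
electricity_per_month = 5.00    # assumed marginal power cost of local use

# Months until the one-time hardware cost beats the recurring subscription.
months = hardware_cost / (subscription_per_month - electricity_per_month)
print(f"break-even after ~{months:.0f} months")  # ~40 months with these numbers
```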
1. Privacy and Data Security
Local Control: Running AI locally ensures your data doesn't leave your device, reducing concerns about data breaches or third-party access.
Running AI locally offers several advantages over using services like ChatGPT, Claude, or Gemini, depending on your needs, priorities, and constraints. Here are some key reasons:
03.01.2025 14:43

just use a local llm

03.01.2025 14:05

Are you using ChatGPT?

03.01.2025 13:59

Pocket AI is like ChatGPT, but it runs locally/offline on your phone to preserve your privacy.

03.01.2025 13:58

Running Llama 3.2 3B locally on my iPhone 13 Pro at more than 30 tokens per second.

03.01.2025 13:13