
Pocket AI

@pocketai.bsky.social

a billion parameters in your pocket

1,876 Followers  |  6,433 Following  |  18 Posts  |  Joined: 03.01.2025

Latest posts by pocketai.bsky.social on Bluesky


OpenAI's latest model, o3, has achieved unprecedented performance on benchmarks like ARC-AGI, sparking debates about the dawn of artificial general intelligence (AGI).

17.01.2025 16:55 | 👍 6    🔁 0    💬 0    📌 0

I'm excited to introduce Pocket, the app that brings powerful AI to your iPhone, entirely offline. With Pocket, you can run advanced AI models on your device, keeping your data private and secure.
apps.apple.com/de/app/pocke...

16.01.2025 17:03 | 👍 4    🔁 0    💬 0    📌 0

That's very cool! But Ollama can't really be compared with a native framework like MLX, which uses GPU acceleration, so comparing their performance wouldn't be meaningful.

12.01.2025 20:10 | 👍 1    🔁 0    💬 0    📌 0
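For context on what MLX-style, GPU-accelerated local inference looks like, here is a minimal sketch using the mlx-lm Python package on an Apple-silicon Mac. The model repo and prompt are illustrative assumptions, not Pocket's actual setup.

```python
# Minimal local-inference sketch with mlx-lm (runs on the Apple-silicon GPU via Metal).
# Assumptions: mlx-lm is installed (`pip install mlx-lm`) and the quantized
# community model named below is available on the Hugging Face Hub.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

reply = generate(
    model,
    tokenizer,
    prompt="In one sentence, why does on-device inference help privacy?",
    max_tokens=64,
)
print(reply)
```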

Look, no internet at all!

04.01.2025 08:27 | 👍 14    🔁 0    💬 0    📌 0

Running AI locally isn't for everyone. It requires:
Hardware Resources: High-end GPUs or specialized accelerators may be needed for performance.
Setup Time: Initial setup and optimization can be time-consuming.
Maintenance: Ongoing updates and troubleshooting are your responsibility.

03.01.2025 14:43 | 👍 9    🔁 0    💬 0    📌 0

8. Experimentation and Learning
Hands-On Experience: Hosting AI locally is a great way to learn more about machine learning and neural networks.
Control Over Updates: You can experiment with new architectures or models without waiting for external providers to update their offerings.

03.01.2025 14:43 | 👍 5    🔁 0    💬 3    📌 0

7. Independence from Providers
No Vendor Lock-in: By running AI locally, you avoid becoming dependent on a specific provider's ecosystem, which could change pricing, policies, or availability over time.
Local AI bypasses this problem.

03.01.2025 14:43 | 👍 5    🔁 0    💬 1    📌 0

6. Transparency
Understandable Behavior: With local AI, you can inspect and modify the model's architecture or weights, giving you insights into its workings.
Open Source Benefits: Many local models are open-source, allowing a deeper understanding of their design and operation.

03.01.2025 14:43 | 👍 5    🔁 0    💬 1    📌 0
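As a concrete example of that inspectability, the sketch below opens a locally downloaded checkpoint with the safetensors library and lists its tensors. The file path is an assumption and should point at wherever your model weights live.

```python
# List every tensor (name, shape, dtype) in a local safetensors checkpoint.
# Assumptions: "model.safetensors" is the downloaded weight file; PyTorch is installed.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)
```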

5. Latency
Reduced Response Times: Running a model locally can minimize the delay caused by sending requests to a server and waiting for a response.
Real-Time Applications: This is especially valuable for applications that require real-time processing, such as voice assistants or robotics.

03.01.2025 14:43 | 👍 6    🔁 0    💬 1    📌 0

4. Offline Access
No Internet Dependency: A locally hosted AI can function without an internet connection, making it useful in remote locations or during outages.

03.01.2025 14:43 | 👍 5    🔁 0    💬 1    📌 0

3. Customizability
Fine-Tuning: Local models can often be fine-tuned or adjusted to meet specific needs, whereas hosted models are usually static and generalized.
Integration: You have full control over integrating the model into workflows, software, or hardware.

03.01.2025 14:43 | 👍 5    🔁 0    💬 1    📌 0
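A hedged sketch of what that fine-tuning can look like in practice, using the Hugging Face peft library rather than Pocket's own stack: LoRA adapters are attached to a local copy of the model so only a small fraction of weights needs training. The base model repo is an assumption and may require access approval.

```python
# Attach LoRA adapters to a locally stored causal LM for parameter-efficient fine-tuning.
# Assumptions: transformers and peft are installed; the base model is available locally or via the Hub.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

lora = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```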

2. Cost Savings
No Subscription Fees: Once you've set up a local model, there are no recurring fees. This can be cheaper in the long run compared to subscription-based services.
Reduced Cloud Costs: For developers or businesses with high usage, local inference eliminates ongoing API or cloud costs.

03.01.2025 14:43 | 👍 6    🔁 0    💬 1    📌 0
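A back-of-the-envelope way to compare the recurring costs involved; every figure here is a hypothetical assumption, not real pricing.

```python
# Hypothetical monthly cost comparison: subscription vs. pay-per-token API vs. local.
subscription_per_month = 20.00        # assumed flat chatbot subscription
tokens_per_month = 2_000_000          # assumed usage
api_rate_per_million = 10.00          # assumed blended API price per million tokens

api_per_month = tokens_per_month / 1_000_000 * api_rate_per_million
print(f"subscription: ${subscription_per_month:.2f}/mo")
print(f"API at this usage: ${api_per_month:.2f}/mo")
print("local inference: $0/mo recurring (hardware and electricity aside)")
```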

1. Privacy and Data Security
Local Control: Running AI locally ensures your data doesn't leave your device, reducing concerns about data breaches or third-party access.

03.01.2025 14:43 | 👍 4    🔁 0    💬 1    📌 0

Running AI locally offers several advantages over using services like ChatGPT, Claude, or Gemini, depending on your needs, priorities, and constraints. Here are some key reasons:

03.01.2025 14:43 | 👍 14    🔁 2    💬 4    📌 0

just use a local llm

03.01.2025 14:05 | 👍 1    🔁 0    💬 0    📌 0

Are you using ChatGPT?

03.01.2025 13:59 | 👍 2    🔁 0    💬 0    📌 0

Pocket AI is like ChatGPT but it runs locally/offline on your phone to preserve your privacy.

03.01.2025 13:58 | 👍 10    🔁 0    💬 0    📌 0

Running Llama 3.2 3B locally on my iPhone 13 Pro at more than 30 tokens per second.

03.01.2025 13:13 | 👍 8    🔁 0    💬 2    📌 0
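For readers curious how a tokens-per-second figure like that is typically measured: the sketch below times a generation with mlx-lm on a Mac and divides the generated-token count by wall-clock time. The model repo, prompt, and token budget are assumptions; the on-phone number comes from Pocket's own runtime, not this script.

```python
# Rough throughput measurement: generated tokens / wall-clock seconds.
# Note this includes prompt processing, so it slightly understates pure generation speed.
import time

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

start = time.perf_counter()
text = generate(model, tokenizer, prompt="Write two sentences about offline AI.", max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens in {elapsed:.2f} s -> {n_tokens / elapsed:.1f} tok/s")
```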
