Another dialog test with #Veo3! This time in a #darkfantasy setting with one of the characters as an #undead #witch Converted to #verticals with Luma Labs Reframe and upscaled with Topaz Labs #aivideo #aicinema #aifilmmaking #aifilm #fantasy #dnd #lich
The original Veo 3 video.
Using #Veo3 to try to recreate the footstep flowers effect from Princess Mononoke (without the dying flowers). Then used @lumalabsai Reframe to turn the 16:9 into a #vertical. Upscaled in @topazlabs #aivideo #aifilm #verticals
My first test with Veo 3! Looking into over the shoulder dialog shots and emotional expressiveness. These are two text to vid prompts I cut together. It took a couple of tries to get two clips that paired up well. Having sound come with the video is definitely a game changer! #veo3 #aivideo #aifilm
Interesting how we went from people doing the robot to people acting like NPCs and game avatars.
Really impressed by chatGPT's new extended memory! It's mentioning context and ideas now that I'd normally have to manually feed it. Almost like I was continuing a chat rather than starting a new one. #chatGPT
Making the personalization profile for Midjourney 7 cracks me up. Do you prefer this picture of a pregnant woman or a closeup of a screaming bat? #midjourney
It’s interesting how a no/low code approach leveraged UI to abstract away having to know syntax. But now with LLMs, even though the code is imperfect and it helps to know how to debug and design architecture, I'm finding I want no/low UI with apps now. Is there an API? #aicoding
Just realizing you can access MCP servers in Windsurf! So it’s pretty simple to just switch between LLMs if you’re chatting with an SQL database and you want to see how the different models see things. I was just using Claude desktop but wanted to see if Gemini 2.5 was better. #mcp
Here’s the original image I uploaded for chatGPT to use as a calligraphy reference.
Super impressed by chatGPT’s new image model! Here I’m testing how far I can push the handwriting with a fake drink recipe. For the calligraphy I uploaded a German manuscript from 1765. #aiart #aiartcommunity
Diesel Dreams
Midjourney / LumaLabs / Suno
#aivideo #aifilm #aianime #aiart #midjourney #lumalabs #sunomusic
Would love to hear this in voice!
Been trying to use the Blender MCP server to make some 3D scenes with Claude. Asked it to make a bowl and in trying to delete some existing geometry it ran a factory reset of the entire app. Luckily it was on a fresh Blender install. Be careful if you try it. #mcp
While there are some assurances on security, I think it’s best to be conservative with what Claude Desktop sees. I’m not analyzing anything overly sensitive. I’ll need to try using a local LLM with MCP down the road but don’t want to overwhelm myself with too much setup. 3/3
I’m still running most of the scripts manually in DBeaver (the SQL server for MCP can’t do everything and I want to be careful about what gets executed). Having Claude read the DB can help with writing tedious SQL. Here I’m linking transactions to tags in a join table. 2/3 #mcp #sql
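The join-table pattern from that post can be sketched in a few lines. This is a minimal sketch assuming hypothetical table and column names (`transactions`, `tags`, `transaction_tags`); the actual schema from the thread wasn't shared:

```python
import sqlite3

# In-memory DB standing in for the real expenses database (schema is assumed).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (id INTEGER PRIMARY KEY, description TEXT, amount REAL);
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
-- Join table: many-to-many link between transactions and tags.
CREATE TABLE transaction_tags (
    transaction_id INTEGER REFERENCES transactions(id),
    tag_id INTEGER REFERENCES tags(id),
    PRIMARY KEY (transaction_id, tag_id)
);
""")
con.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                [(1, "bus pass", 40.0), (2, "groceries", 85.5)])
con.execute("INSERT INTO tags VALUES (1, 'transportation')")
con.execute("INSERT INTO transaction_tags VALUES (1, 1)")

# Pull every transaction tagged 'transportation' by going through the join table.
rows = con.execute("""
    SELECT t.description, t.amount
    FROM transactions t
    JOIN transaction_tags tt ON tt.transaction_id = t.id
    JOIN tags g ON g.id = tt.tag_id
    WHERE g.name = 'transportation'
""").fetchall()
print(rows)  # [('bus pass', 40.0)]
```

The join table is what makes tagging many-to-many: one transaction can carry several tags and one tag can cover many transactions, without duplicating rows in either base table.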
Learning how to use SQL and to analyze databases with Anthropic’s MCP. I’m new to this DB stuff so it’s been really eye opening. Here I had Claude Desktop make some animated charts from expenses tagged as transportation.🧵1/3 #sql #mcp #aicoding #dataanalysis
While I totally get the value in building apps without reading code, learning to read code/architecture has given me a huge new vocabulary for breaking down problems and understanding systems thinking outside of software. But it’s about as tough as learning a new language. #aicoding
Testing #HedraLabs new lip syncing Character 3 model on some #AImusic I made in #suno Ref images made in #magnifique and also upscaled for the closeup variations. Pretty incredible for just img&audio2vid! Camera zooms/editing/color-grade/grain in post. Upscaled to 4k. #aivideo #aifilmmaking
I want to see a movie like Flow, but with Godzilla. Like forget about the humans.
Haven’t been doing as much with the creative side of AI gen lately because I’ve been so focused on vibe coding and learning to code in general. It’s already paying off: when new stuff comes out, using it isn’t nearly as intimidating or difficult for me as it used to be. #aicoding
Initial tests with Wan 2.1 14b text2video running locally in ComfyUI on a 3090. 1st vid was 20 steps and took 30 minutes. 2nd vid was 50 steps and took 80 minutes. Both were 1280x720 and using close to all 24GB of VRAM. Still messing with settings and different models. #aivideo #comfyui
Quick o3-mini deep research prompt asking for all AI releases in the past 7 days that can actually be used by a person or small business (with a powerful consumer GPU). Report includes release dates. Fast way to get up to speed if you’ve been working on other things. #deepresearch #chatGPT
Deep Research available in the regular chatGPT subscription! Nice!
By chat with docs I just mean loading it into context so you can ask questions about it. I could be using the term wrong. But it’s useful if you want to expand or summarize anything in the report.
Been loving this open source version of Deep Research by dzhng and reading the files in Frogmouth. Here I’m just asking what the best Gaussian Splat repos currently are. You can also manually paste the reports into an LLM and chat with it.
This new paper shows people could not tell the difference between the written responses of ChatGPT-4o & expert therapists, and that they preferred ChatGPT's responses.
Effectiveness is not measured. Given that people use LLMs for therapy now, this is an important (and urgent) topic for study.
Perplexity announced their own DeepResearch that includes a free tier and a generous $20/mo tier
People who have tried both are finding the Perplexity version on par with, perhaps better than, OpenAI’s ($200/mo)
Analysis of the Chernobyl drone strike yesterday:
www.perplexity.ai/search/yeste...