🧵 Sunday Threads - Kling AI Try On, State of AI Agents, Consistent Character Training
3️⃣ threads 🧵 worth reading into 🧠
❶
Kling AI Try On
Kling AI just dropped AI Try-On.
Now anyone can change outfits on anyone.
8 wild examples:
– Min Choi (@minchoi)
2:59 PM • Nov 30, 2024
Now here is an AI tool that (if widely adopted) actually might be worth highlighting as a reason for that expected e-commerce boom a lot of people have been forecasting.
Here are the details:
AI-Powered Try-On: Generates realistic images of users wearing selected clothing, adapting to body type and pose.
Customization: Upload your photo and browse virtual clothing options.
Realistic Results: Advanced AI is said to accurately show how clothes drape and fit.
Simple Process: Just upload a photo and pick clothes to see instant visualizations.
Pretty slick! We've heard of this before: both Amazon and Walmart have been working on or teasing similar features earlier this year, albeit limited to certain products. So it's really the realism, and the capacity for showcasing how clothes actually "fit," that has people buzzing about this new drop from Kling. That, and the fact that it seems genuinely easy to use.
However, like most things in the AI space, its adoption will probably come down to cost. If you've played around with video generators, you know they aren't cheap.
For example:
The Runway ML Standard plan costs $12 per month and provides 625 credits. Each Gen-3 Alpha generation costs 10 credits per second of video, so a 10-second video (one of the available export durations) requires 100 credits. The average cost per video works out to:

Average Cost = Monthly Membership Cost ÷ (Monthly Credits ÷ Credits per Video)
Average Cost = $12 ÷ (625 ÷ 100) = $1.92

Therefore, the average cost of generating a 10-second video on Runway ML's Standard plan is $1.92.
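If you want to run the same back-of-the-envelope math for other plans, the calculation is a one-liner. A minimal sketch, using the plan numbers quoted above (prices and credit rates change, so treat them as assumptions):

```python
# Average cost per video on a credit-based plan, using Runway ML's
# Standard plan as quoted above: $12/month, 625 credits,
# Gen-3 Alpha at 10 credits per second, 10-second export.

monthly_cost = 12.00       # USD per month
monthly_credits = 625      # credits included in the plan
credits_per_second = 10    # Gen-3 Alpha burn rate
video_seconds = 10         # chosen export duration

credits_per_video = credits_per_second * video_seconds   # 100 credits
videos_per_month = monthly_credits / credits_per_video   # 6.25 videos
avg_cost_per_video = monthly_cost / videos_per_month     # $1.92

print(f"${avg_cost_per_video:.2f} per {video_seconds}s video")
```

Swap in any other plan's numbers to compare providers on a per-video basis.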
I'm not saying it will be exactly equivalent for try-on videos, but think about that for a moment. Who bears the cost: the consumer or the vendor?
Some high-end vendors might provide virtual try-on for free… but if I were a betting man, I'd bet heavily that before long, groups of internet trolls would be running try-on generations non-stop just to run up the company's bill.
So it's probably going to require a premium subscription for shoppers, and come with a usage limit at that. Amazon, for example, gives Prime members 3 free try-ons per week.
| Stock Rising | Stock Falling |
| --- | --- |
| e-commerce brands and online-only stores | in-person-reliant retail |
Either way, one could wonder whether part of the magic of shopping is dispelled by actually seeing the item on yourself first. We are usually buying the dream, right? Despite the rise of inclusive modeling, basic consumer psychology suggests that people buy things they feel will make them better somehow, or more like the people they see wearing them (models included).
So perhaps seeing yourself in the item first could actually end up restricting purchases rather than fueling them.
In any case, it's a fun thing for us to play with for now, and probably just a cute tool for premium shoppers in the future.
Any thoughts?
❷
AI Agent State of Affairs
Here is everything that happened in AI Agents this week 🧵
(save for later)
– Adam Silverman (Hiring!) (@AtomSilverman)
11:01 PM • Nov 29, 2024
Much has been made of AI agents and their coming effects on software, but separating hype from reality is never easy.
And when the hype is something as far-out as "agents can automate literally every task for you," it's hard to pin down what's actually useful now. This thread does a great job of highlighting the ways people are building useful agents at this very moment.
The key piece for understanding the hype is here:
18/
"The big inflection point in building software comes when non-techies can do technical things by using AI" - @bindureddy
– Adam Silverman (Hiring!) (@AtomSilverman)
11:01 PM • Nov 29, 2024
Like most things in AI, the underlying potential rests on what happens when AI development makes building tech products accessible to non-technical people.
"Anyone can produce a Hollywood movie, you just need an idea"
"Anyone can make Taylor Swift-quality music [which isn't that great imo 🤮], you just need an idea"
"Anyone can write software, you just need an idea"
It is pretty cool to imagine that kind of world. But we are probably vastly underestimating how long it will take to get there. Sure, everyone seems to have a smartphone now, but ask them to reset their cell tower settings and we'd probably lose about 90% of the population.
What is clear and present, though, is that new possibilities are wide open for those who can quickly master these new tools. Imagining what can be built with the tech we already have is the first step. Sending a piece of text from one phone to another was pretty cool. Sending a picture through that same channel, even cooler. Sending a series of 24 pictures per second, stitched together so they play like real life (video)? Insane.
But that wasnāt really original, was it?
We were sending text written on pieces of paper through the mail for 100 years before that.
On pieces of parchment delivered by pony riders before that.
And verbal messages conveyed by marathon runners even before that.
So perhaps we should think first about what people will ultimately want to do on a smartphone that talks to them like ChatGPT. It's probably pretty much what we do already, just done a lot faster.
| Stock Rising | Stock Falling |
| --- | --- |
| Software writers, existing builders | Anything that's slow and frustrating on the internet! |
What's your take?
❸
FLUX LoRA Consistent Character Training
How to train a FLUX LoRA on an illustrated character sheet to generate realistic, consistent characters.
Workflow below:
– Heather Cooper (@HBCoop_)
4:18 PM • Nov 29, 2024
This user shares a nice workflow for generating custom characters with consistency. But at this point, do we even care about custom workflows? We should expect most AI video tools to offer this level of consistency out of the box. Here are a few that have already announced similar features within their existing toolkits.
KLING: Developed by Chinese company Kuaishou, KLING has introduced a "Custom Models" feature that significantly improves character consistency in generated videos.
Leonardo's Character Reference: This tool enables creators to generate consistent characters across various images and scenes, simplifying workflows in film, TV, fashion, marketing, and storytelling.
Luma's Dream Machine: Available on web and iOS, Dream Machine allows users to create videos with consistent characters by understanding interactions between people, animals, and objects within the physical world.
RenderNet AI: This platform provides advanced control over character design, composition, and style, enabling the generation of consistent characters in both images and videos.
Atlabs: Atlabs offers AI-driven video creation tools that ensure character consistency, enhancing the depth and realism of storytelling across various media formats.
Consistent character generation is one of those things creatives have seemed to be waiting on before really diving into AI tools. It's crucial for making AI usable for content creators for several key reasons:
Storytelling Coherence
When characters maintain consistent appearance and traits across scenes, audiences can:
Form stronger emotional connections
Follow stories without distractions
Become fully immersed in the world
It's absolutely essential for long-form content like comics and animated series.
Brand Identity and Recognition
For brand and franchise development:
Creates recognizable visual identity
Builds brand loyalty
Enables successful merchandising
Production Efficiency
AI-generated consistent characters streamline creation by:
Reducing manual adjustments
Saving time and resources
Allowing focus on creative aspects
Collaborative Workflows
Benefits for team environments:
Provides unified reference point
Maintains consistency across team members
Simplifies style guide creation
Adaptability and Scalability
Offers flexibility through:
Easy character adaptation
Diverse content creation
Quality-maintained scaling
Enhanced Creative Control
Gives creators better control with:
Fine-tuning capabilities
Character variation options
Cross-style consistency
Consistent character generation is definitely a game-changer for AI content creation. It makes storytelling smoother, helps build brands, and speeds up production. And it's getting better fast. The creators who jump on this tech early will have a serious edge in pumping out quality content at scale.
Let's hear it!