
AiTuts Newsletter

Niji V6 is here (Midjourney Anime Model)

Published 4 months ago • 5 min read


Good morning. Welcome back to Aituts. Do we have some goodies for you today!

In this email:

  • Niji V6 is out and it looks incredible
  • + Midjourney V6 tips: memory and more
  • Comfy Textures: automatically textures 3D models
  • Taiyi: The first bilingual open-source text-to-image model for Chinese & English

HEADLINE

Niji V6 (Midjourney's Anime Mode) is out and it looks incredible

On Monday evening, Midjourney CEO David Holz announced the release of Niji V6 the same way he usually announces new models: completely out of the blue.

The Niji models specialize in anime-style images. Don't be fooled though: they can be used for far more than anime.

Many people prefer the Niji models over the standard Midjourney models for a ton of artistic tasks.

So what's new in Niji?

1/ Very opinionated default style

Holz says that the Niji V6 default style is stronger than in previous models. The default style is quite distinctive because of its very saturated colors.

You can get more artistic, varied results by turning it off with --style raw.
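Here's what that looks like in practice (an illustrative prompt of my own, not one from the announcement; the --niji 6 and --style raw parameters are real):

/imagine prompt: a quiet coastal town at dusk, watercolor illustration --niji 6 --style raw

Drop --style raw and you get the punchy, saturated default look instead.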

And yes, you can do realistic generations as well:

2/ Improved prompt understanding

Just look at how much both the subject and style understanding improve in this one:

This will let us write longer, more complex prompts that describe exactly what we want.

3/ Text capabilities

As with standard Midjourney V6, text rendering is greatly improved in Niji V6. Wrap the words you want in the image in quotation marks "like this".
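For example (an illustrative prompt, any short phrase works):

/imagine prompt: a neon storefront sign that reads "OPEN LATE", rainy street at night --niji 6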

4/ Image Cohesion

The Niji line of models produces some of the best image cohesion I've seen in AI images.

This would be my #1 choice for creating prints and other custom products.

Again, use --style raw to reduce unnecessary details and saturation.

5/ Niji just does it better

Niji V6 renders a lot of styles better than Midjourney V6. Check it out:

Low poly 3D:

Pixels:

Crafts:

So far, reception to Niji V6 has been very positive.

Users have noticed that Midjourney V6's strength in realism comes at the expense of other styles.

That's where Niji V6 comes in: it is fantastic at illustration and painting, and much more versatile than people would expect from an "anime" model.

What are you going to use Niji V6 for?


SPONSOR

A practical AI digest

The problem with most AI newsletters is that they aren't very practical.

It's another day of "cool, Microsoft released something new that I can't use yet".

That's why I like the AI Tool Report. Every issue comes with:

  • News digest
  • Popular tools
  • ChatGPT and Midjourney prompts

TRY IT

3 more cool things in Midjourney V6

1/ How memory in V6 works

In V5, only the first 15-20 words had a strong influence on the generation, before we ran out of memory.

V6 gives you much more memory. But instead of allocating memory to the words that come first, V6 gives memory to the stronger tokens.

What's a strong token?

The more visually descriptive a word is, the stronger it is. Here's a test for that: does an image pop into your head when you think of the word?

Words related to art, design or culture are generally very 'strong'.

One example:

Here we're using a monologue from The Matrix as a prompt. Apart from "Neo", most of it is useless.

So if we add a strong token like "pepperoni pizza", it will be prioritized, even if it comes at the very end.
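Roughly, a prompt like this (a made-up illustration, not the exact prompt from the example):

/imagine prompt: I know you're afraid. You're afraid of us. You're afraid of change. pepperoni pizza --niji 6

Most of that text barely registers visually, but "pepperoni pizza" does, so you'll get pizza even though it's the last thing in the prompt.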

2/ Getting rid of clutter

A lot of the prompting techniques we inherited from the previous models are useless now.

The left and right images are almost identical:

3/ V6 now supports image editing with Pan, Zoom and Vary (Region)

Pan: Click any of the 4 arrows under an upscaled image to extend the image in that direction.

Zoom: Zoom out of the image.

Vary (Region): Select a region of the image to re-generate with a new prompt.


RESOURCE

Free Book: SDXL Magic

SDXL Magic is a quick and easy handbook on how to get started with SDXL. Featuring:

  • My favorite, easy-to-use SDXL ComfyUI workflow
  • Recommendations for SDXL checkpoint models, LoRAs, upscalers
  • Example prompts for realistic and stylized generations

ROUNDUP

1/ Comfy Textures Release: Texture generation for Unreal Engine, that uses ComfyUI and SDXL to project generated images onto 3D models directly in the Unreal Editor.

2/ Amazon releases Diffuse to Choose, an inpainting model that allows users to place any e-commerce item in any setting or on any person, ensuring coherent blending with realistic lighting and shadows. The code will be released publicly soon.

3/ Google Research presents Lumiere. Dubbed a "space-time video diffusion model", it will be capable of text-to-video, image-to-video & video inpainting.

4/ Stability AI releases Stable LM 2 1.6B, a 1.6-billion-parameter small language model trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. Stable LM 2 can be used with a Stability AI Membership.

5/ Tencent AI Lab releases VideoCrafter2 for text-to-video generation, featuring major improvements in video quality, motion, and composition compared to VideoCrafter1.

6/ DeforumationQT, a UI for Deforum, is released: Remember those psychedelic Deforum videos that were all the rage a couple of months ago? This is the first user interface built specifically for making that type of video.

7/ Taiyi Stable Diffusion XL is the first open-source text-to-image model built for bilingual prompting with Chinese and English (current open-source text-to-image models predominantly support English, with limited bilingual capabilities).

8/ moondream for ComfyUI: moondream describes itself as a "tiny vision language model". You can ask it questions about images like "what is she holding", and the model will respond in natural language. Now available as a ComfyUI node!


And... that's a wrap!

Till next time,

Yubin & crew

