

Gurney Journey

This daily weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art schools and museums.

Fake Fashion Models

Computers can create photo-real images of fashion models that never lived in the flesh.

In the short morphing video, hemlines rise and fall, stances shift, and genders blend into each other. (Link to video on YouTube)

The developers in Japan use generative adversarial networks to create the images. They have also set up their AI system to generate fake pop stars.

They say they plan to improve the technology to the point where they can offer it to the apparel, cosmetics, and advertising industries.

They're also working with human co-creators to produce new Manga-style characters, saying the technology will help them "to significantly reduce the cost of producing content."
[Edit] The original links don't work, but here's another company that creates humans artificially: Generated Photos

Siggraph Previews Photo-Real Computer Graphics

Every summer when computer graphics experts gather at Siggraph, they reveal new technical breakthroughs. (Link to video on YouTube)

This year's preview video reveals realistic simulations of goopy frosting, ferrofluids, and human anatomy. It will be possible to change the lighting in photos, graft an art style onto a captured video, and seamlessly change what a talking head is saying by editing a text transcript.

Silly Rubber, viscoelastic and elastoplastic material
by Fang, Li, Gao, Jiang
These technical innovations will find their way into the visual effects that you will see in TV commercials, movies, and eventually consumer graphics programs.
Siggraph 2019 will be held 28 July - 1 August in Los Angeles

Creating People Who Don't Exist

Have you ever seen this person before?


There's no chance of it because she was just created by a computer.


A new website called "This Person Does Not Exist" uses generative adversarial networks (GANs) to make a new face from scratch, a face that no one has ever seen before. 


Each time you refresh the page on the website, an entirely new face appears. The software outputs a variety of ages, ethnic backgrounds, settings, and lighting scenarios, and the faces are specific, not "average" or generic looking. 

And each one seems relatively consistent and logical, but if you search long enough you'll find problems with ears, jewelry, hair, or glasses.


The creator of the page is Phillip Wang, a software engineer for Uber, who wanted to demonstrate the potential of GANs.

This technology will transform many aspects of computer graphics, such as video games, visual effects, and 3D modeling, and it has more unsettling implications for generating convincing false news, sham celebrities, and fake art.
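For readers curious what "generative adversarial" actually means, here is a toy sketch of the idea, boiled down to a single number: a one-parameter "generator" learns to mimic a simple data distribution by fooling a logistic "discriminator." This is purely illustrative; a face generator like the one behind this site is a deep convolutional network trained on a huge library of photographs, not anything this small.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: the "real" data is drawn from N(3, 1); the generator
# shifts random noise by a single learned parameter theta, and the
# discriminator is a one-feature logistic classifier (w, b).
theta = 0.0          # generator parameter (learned shift)
w, b = 0.1, 0.0      # discriminator parameters

lr, batch = 0.05, 64
for _ in range(3000):
    real = 3.0 + rng.standard_normal(batch)
    fake = theta + rng.standard_normal(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_real = -(1.0 - d_real)        # dLoss/dlogit on real samples
    grad_fake = d_fake                 # dLoss/dlogit on fake samples
    w -= lr * (np.mean(grad_real * real) + np.mean(grad_fake * fake))
    b -= lr * (np.mean(grad_real) + np.mean(grad_fake))

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    fake = theta + rng.standard_normal(batch)
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean(-(1.0 - d_fake) * w)

print(round(theta, 2))  # the learned shift; it drifts toward 3.0, the real-data mean
```

In a real GAN, the same tug-of-war plays out over millions of network weights and image pixels rather than one number, but the logic is identical: the generator only ever improves by learning what fools the discriminator.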

Extracting and Animating a 2D Character

Scientists have developed software that can extract a figure from a 2D painting or photo and bring it to life. 

The computer fills in the background and animates the character, so that it can walk or run out of the picture frame. (Link to YouTube)

The software works with photos, drawings, or even Picassos. Once the computer recreates the figure in 3D space, it can create a video or even a virtual projection that can be viewed with augmented reality goggles.
Scientific Paper: Photo Wake-Up: 3D Character Animation from a Single Photo
Thanks, Dorian

Digital Doug Project

Visual-effects company Digital Domain is working to create the long-dreamed-of virtual actor.

Called the "Digital Doug Project," it starts off by replicating the face and mannerisms of Doug Roble, Director of Software Engineering at Digital Domain.

The team begins by making extremely detailed scans of Mr. Roble, capturing the way the facial features wrinkle and pucker, and the way the blood flow changes with different expressions.

Digital Doug, courtesy Digital Domain
The method uses machine learning to process the data from an informal performance capture, made without all the special markers. The result is a virtual Doug that any actor could puppeteer.

It offers a hint of the powerful toolset that is emerging to assist filmmakers and game developers.

The ultimate output of this technology will be a whole cast of virtual actors, both human and non-human, which can be precisely controlled by any performing actor.

(Link to YouTube video)
Meet Digital Doug: Digital Domain's Doug Roble On Motion Capture
The Near Future of Performance Capture
Unreal Engine podcast about Digital Doug

A.I. systems that generate photo-real video

Neural networks are getting pretty good at synthesizing video from fragmentary sources, as shown in this latest production from Two-Minute Papers (link to YouTube).

Photo-realistic expressions at right are generated
purely from the line drawings at left. Source
The generative adversarial networks can effectively create video from animated line drawings, as in the still frame above. They're also getting better at classifying the elements of a scene into its various components and translating one class of objects into another. So, for example, a tree-lined street can be changed so that it's lined with buildings instead, or vice versa.

This latest iteration does so without as many weird jumps or gaps.
Read the scientific paper here as a PDF

Realistic Graphics via Voxel Cone Tracing

Creating photo-real graphics in real time on a normal computer is difficult because it requires calculating light transport: the near-infinite potential pathways of light rays that interact with the scene.

A technique called voxel cone tracing achieves a remarkable level of naturalism at interactive speeds, which makes it applicable to video game graphics.
Watch the YouTube video by Two Minute Papers
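A rough sketch of the idea: the scene is pre-filtered into a mip-mapped voxel grid, and instead of tracing many individual rays, a single cone is marched through the grid, reading from coarser mip levels as its footprint widens. The toy Python below is illustrative only (CPU, nearest-neighbor sampling, occlusion only; real implementations run on the GPU against 3-D textures), and it accumulates occlusion along one cone:

```python
import numpy as np

def build_mips(density):
    """Average-pool a cubic density grid into a mip chain."""
    mips = [density]
    while mips[-1].shape[0] > 1:
        d = mips[-1]
        n = d.shape[0] // 2
        d = d[:2*n, :2*n, :2*n].reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5))
        mips.append(d)
    return mips

def sample(mips, pos, level):
    """Nearest-neighbor lookup at a given integer mip level."""
    level = min(level, len(mips) - 1)
    d = mips[level]
    idx = np.clip((pos / 2 ** level).astype(int), 0, d.shape[0] - 1)
    return d[tuple(idx)]

def cone_trace(mips, origin, direction, aperture=0.5, max_dist=32.0):
    """Accumulate occlusion front-to-back along a widening cone."""
    occlusion, t = 0.0, 1.0
    while t < max_dist and occlusion < 0.99:
        radius = aperture * t                # cone footprint at distance t
        level = max(0, int(np.log2(max(radius, 1.0))))
        a = sample(mips, origin + t * direction, level)
        occlusion += (1.0 - occlusion) * min(a, 1.0)  # alpha compositing
        t += max(radius, 1.0)                # step size tracks the footprint
    return occlusion

# Example: an empty 32^3 scene with a solid slab that the cone runs into.
grid = np.zeros((32, 32, 32))
grid[20:, :, :] = 1.0
mips = build_mips(grid)
print(cone_trace(mips, np.array([2.0, 16.0, 16.0]), np.array([1.0, 0.0, 0.0])))
# prints 1.0: the cone is fully occluded by the slab
```

The key saving is that one widening cone with a handful of samples stands in for a whole bundle of rays, which is what makes the technique fast enough for games.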

Computers can guess what you're seeing

In previous posts, we've seen how artificial-intelligence systems can generate captions for images, and can even create a photo-real image from a written description.

A deep-learning algorithm can tell what you're looking at by scanning your brain. Photo: CNN
Now, scientists in Japan report that the same kind of technology can guess what you're looking at from your brain activity alone. The process uses fMRI (functional magnetic resonance imaging), which scans blood flow in the brain in real time.

After a period of training, the system generates a short natural-language caption based on what it supposes you're looking at. Sometimes the captions are right on. For example, when a person was looking at the photo to the left, the computer correctly hypothesized that the image showed: "A man is playing tennis on the court with his racket."

When the guess was a little wrong, it was wrong in an interesting way. Once, when the subject being scanned was looking at a man kayaking on a river, it concluded: "A man is surfing in the ocean on his surf board."

Right now the technology is limited by practical issues (the subject has to lie down in a big expensive machine). However, it could become part of an efficient brain-computer interface (BCI), which requires close two-way communication between the user and the machine. 

It also portends a variety of creep-factor sci-fi implications. For example, conceivably authority figures would be able to monitor "thought crimes" based on what's going on in your brain.
Scientific article: Describing Semantic Representations of Brain Activity Evoked by Visual Stimuli
Popular article on Futurism: What Are YOU Looking At? Mind-Reading AI Knows
Text to Image Synthesis
Computers are Learning to Caption Photos
Style Transfer by Deep Learning
