Gurney Journey | category: Computer Graphics | (page 2 of 11)



This daily weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art schools and museums.

Bringing Old Photos to Life

Old photos provide a window to life in the past. A great deal of information is contained in those photos, but a lot of visual data has been lost, too—not just the color, but other features such as subsurface scattering.

A couple of recent digital innovations have helped to bring old photos and paintings to life. There's a lot you can do with Photoshop, but there are limits to what you can accomplish with denoising, colorization, and superresolution. 

The result here has reduced some of the cragginess of the original Lincoln photo and made him look younger, but presumably that could be dialed differently. 

'Time Travel Rephotography' is a technique for recreating the natural, full-color appearance based on the original photograph and an input photo of a contemporary person. The metrics of the modern person are shifted to match those of the historic person.

The way to test this method would be to take a photo of a contemporary person using an antique process and see if you could restore the missing information to match a high-res photo of that person.  

Another digital reconstruction tool is MyHeritage, an app that takes a photographic input, or even old paintings or statues, and animates them with blinks and turns (link to YouTube video).

Because it draws power from large data sets, the results have some convincing nuances, such as the movement of bags under the eyes. I think it would actually be more effective if the movements were more limited and subtle.  

Combining these techniques and animating them with a motion-captured actor's performance would yield even better results.


More about Time Travel Rephotography on Two Minute Papers

Thanks, Mel and Roger

Siggraph's 2020 Demos

Siggraph is a computer graphics research conference, now held virtually. Its Asia subchapter has just shared some of the new technical papers demonstrating new techniques for digital animation and graphics (link to YouTube video).

Here's another recent video (Link to YouTube) with highlights of their main conference. There continues to be remarkable progress in surface flow dynamics, secondary actions on deformable objects, and artistic style transfer to video source animation. 

These brief demos serve as a preview of digital tools and techniques that will filter down to individual artists, commercial cameras, and visual effects in movies. As a traditional painter, I'm fascinated to learn how these scientists analyze and reproduce familiar phenomena of the visual world.

New App Adds Detail to Blurry Image

New software is able to take a low resolution image and add missing detail. 


The tool supplies missing information using a generative adversarial network. It draws on a big data set to generate a plausible-looking face that matches the pixelated version.


Researchers at Duke University who developed it describe the process this way: "The system scours AI-generated examples of high-resolution faces, searching for ones that look as much as possible like the input image when shrunk down to the same size."

The resulting face is photographically detailed, and it fits the initial pixelated image, but it's really only one of several possible solutions.
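The core idea in the researchers' description can be sketched in a few lines of code. This is a toy illustration (not the Duke team's actual software): a stand-in linear "generator" replaces a real GAN, and a random search stands in for their optimization, but the logic is the same—find a latent code whose generated high-res face, when shrunk down, matches the low-res input.

```python
# Toy sketch of searching a generator's latent space so that the
# downscaled output matches a low-res input. A fixed random matrix
# stands in for a real GAN generator.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(64, 8))              # toy "generator": latent (8,) -> image (64,)

def generate(z):
    return G @ z                          # stand-in for a GAN forward pass

def downscale(img, factor=8):
    # average-pool 64 "pixels" down to 8, mimicking the shrink-down step
    return img.reshape(-1, factor).mean(axis=1)

def search_latents(low_res, n_tries=2000):
    """Keep the latent whose generated face looks most like low_res when shrunk."""
    best_z, best_err = None, np.inf
    for _ in range(n_tries):
        z = rng.normal(size=8)
        err = np.sum((downscale(generate(z)) - low_res) ** 2)
        if err < best_err:
            best_z, best_err = z, err
    return best_z, best_err

# Simulate a "true" high-res image and its pixelated version, then search.
true_z = rng.normal(size=8)
low_res = downscale(generate(true_z))
z, err = search_latents(low_res)
print(err)  # best squared mismatch found; the recovered face is only one plausible match
```

Note that the search only guarantees a face that *downscales* to the input—which is exactly why, as above, the result is one of several possible solutions.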
Articles about the process from Hypebeast and Techxplore

Wall Street Journal's Hedcuts

Top row: hand-drawn portraits; Bottom row: computer-generated versions.
The Wall Street Journal has developed an artificial-intelligence system for creating its distinctive 'hedcut' portraits.
Human-created hedcut of Grumpy Cat, 2013, courtesy Wall Street Journal
Hedcuts have a wood-engraving or scratchboard look, made up of dots and dashes.

Left: human-created 'hedcut' of actress Chloë Grace Moretz
Right: AI-created hedcut, courtesy Wall Street Journal
The AI learned the style and produced adequate results in most cases. But there were difficulties. One challenge was "teaching the tool to render hair and clothes differently than skin, which was often a matter of whether to crosshatch versus stipple."

Error cases caused by the AI working with too limited a set of data,
courtesy Wall Street Journal
"The most harrowing issue of all was overfitting, which happens when a model fits a limited set of data too closely. In this case, that meant the machine became too satisfied with its artistic ability and began producing terrifying monstrosities like these."
Read more on Wall Street Journal: What’s in a Hedcut? Depends How It’s Made.

Semantic Paintbrushes

Painting using artificial intelligence or machine learning may not have completely arrived yet, but at least the software can make educated guesses about textures and colors.

The brush has a "semantic" understanding of what sort of thing it is painting, meaning it "understands" whether it's sky, water, mountain, or a building, and it generates appropriate forms based on a giant database of existing images (link to YouTube video).
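To make the idea concrete, here's a toy sketch (assumed, not the demo's actual code) of how a semantic brush works under the hood: each brushstroke writes a class label like "sky" or "water" into a label map, and a renderer then fills each labeled region. A real system would hand the label map to a trained image generator; here a flat color per class stands in for that step.

```python
# Toy semantic-brush pipeline: strokes paint *labels*, not pixels;
# a renderer then turns the label map into an image.
import numpy as np

CLASSES = {"sky": 0, "water": 1, "mountain": 2}
PALETTE = {0: (135, 206, 235),    # sky blue
           1: (28, 107, 160),     # water blue
           2: (110, 95, 80)}      # mountain brown

label_map = np.zeros((64, 64), dtype=int)        # canvas starts as all sky
label_map[40:, :] = CLASSES["water"]             # brushstroke: water along the bottom
label_map[20:40, 10:50] = CLASSES["mountain"]    # brushstroke: a mountain band

def render(labels):
    """Stand-in renderer: map each semantic label to its color."""
    img = np.zeros(labels.shape + (3,), dtype=np.uint8)
    for cls, color in PALETTE.items():
        img[labels == cls] = color
    return img

image = render(label_map)
```

The payoff of this design is that the system always knows *what* a region is, so a smarter renderer can synthesize believable waves or rock textures rather than flat color.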

Machine Hallucination

Refik Anadol has created the effect of a 'machine hallucination' at Artechouse in New York City.

Projected on flat surfaces, the imagery suggests flowing textures and floating numerals (link to YouTube video).

The effect disorients the viewer, forcing them to question whether the apparent hallucination emerges from their own consciousness or from somewhere else (link to YouTube video).
Via Design Boom

Humans Team Up with Computers to "Breed" Images


Here's a piece of digital art made by ArtBreeder, a website that generates images by machine learning, and then lets you "crossbreed" them to create new offspring.


Here's how Recomendo describes the process: "Using deep learning (AI) algorithms it generates multiple photo-realistic “children” mutations of one image. You — the gardener — select one mutant you like and then breed further generations from its descendants." 

"You can also crossbreed two different images. Very quickly, you can create infinite numbers of highly detailed album covers, logos, game characters, exotic landscapes."
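The "breeding" metaphor maps neatly onto how these systems represent images internally. Here's a minimal sketch (my own illustration, not ArtBreeder's real code) of the two operations Recomendo describes: mutation nudges a parent's latent vector with random noise to produce children, and crossbreeding blends two parents' latent vectors.

```python
# "Breeding" images in a generator's latent space: every image is a
# vector of numbers, so children are noisy copies or blends of parents.
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 512                      # typical latent size for StyleGAN-like models

def mutate(parent, strength=0.1, n_children=4):
    """Children: the parent latent nudged by small random noise."""
    return [parent + strength * rng.normal(size=parent.shape)
            for _ in range(n_children)]

def crossbreed(a, b, mix=0.5):
    """Blend two parent latents; mix=0.5 is an even split of both."""
    return mix * a + (1 - mix) * b

parent = rng.normal(size=LATENT_DIM)  # the gardener's starting image
children = mutate(parent)             # the gardener would view each rendered child
child = crossbreed(children[0], children[1])
```

The human "gardener" closes the loop: you look at the rendered children, pick a favorite, and feed its latent vector back in as the next parent.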


The software currently doesn't "understand" the meaning of writing, but only the appearance of typographic letterforms, so the system churns out images that resemble evocative album cover designs.  


You can also create landscapes that look almost plausible, or combine dissimilar environments and see what results.


Some images appear to morph organic textures with humanoid forms, like this "feather-boa yeti." You could start with an image like this as reference, and then elaborate it with your own old-school sketch process.


Because the judgment of flesh-and-blood humans assists the computer in shaping the evolution of these images, the process yields different results than a generative adversarial network acting alone.

If you want to play with the software, it's free at ArtBreeder.
Thanks, Dan

Your Face as an Old Master Portrait


A new website lets you upload a photo of yourself and modifies it to look like an old master painting.

This is not the kind of style transfer we've seen before on the blog. The AI erases smiles, changes lines, colors, and shapes, and decides for itself which period art style to use.


With this system, according to their website, "anyone is able to use GAN (Generative Adversarial Network) models to generate a new painting, where facial lines are completely redesigned. The model decides for itself which style to use for the portrait. Details of the face and background contribute to direct the model towards a style. In style transfer, there is usually a strong alteration of colors, but the features of the photo remain unchanged. AI Portraits Ars creates new forms, beyond altering the style of an existing photo."
See a gallery and try it out at AI Portraits
Article about the algorithm on Fast Company: AI Portraits
Thanks, Armand

Clebsch Maps

Clebsch Maps are a way of visualizing the pattern of dynamic movement within fluids, such as air or water.

They're particularly useful for creating images of what happens in the air around flapping wings.

Here is a hummingbird in flight with a Clebsch Map showing the air velocities around it.

Each flapping wing is surrounded by tube-like vortices of quickly spinning air. Here they're rendered to look like glistening plastic wrap or glass tubing.
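For the mathematically curious, the standard Clebsch representation from textbook fluid dynamics (a general sketch, not quoted from the study itself) writes the velocity field in terms of two scalar functions, and the vortex tubes pictured here are where the level sets of those two functions intersect:

```latex
% Clebsch representation: velocity u built from scalar fields \alpha, \beta, \varphi
\mathbf{u} = \alpha \,\nabla \beta + \nabla \varphi
% Taking the curl kills the gradient term, so the vorticity
% (the spinning air around the wings) depends only on \alpha and \beta:
\boldsymbol{\omega} = \nabla \times \mathbf{u} = \nabla \alpha \times \nabla \beta
```

That is why rendering just those two functions captures the glassy tubes so cleanly: the vortices live entirely in the "map" pair α and β.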

This YouTube video shows Clebsch Maps in various applications. This research will have practical applications for understanding the flight of insects and drones, as well as for creating new CGI techniques in the visual effects industry.
Study by Ren. Y Dong, H. Deng, et al.

