howeird on 14/5/2020 at 02:38
[video=youtube;qC5KtatMcUw]https://www.youtube.com/watch?v=qC5KtatMcUw[/video]
Hopefully this comes to PC. One of the programmers for the demo speaks and they show in-game play.
Sulphur on 14/5/2020 at 03:18
There's no hopefully about it. It's a multiplatform engine from the get-go and backwards compatible with UE4. Word is they're planning to launch it in 2021.
demagogue on 14/5/2020 at 03:59
Not gonna lie, I looked at that like Homer looks at a donut, gargling noises and all.
henke on 14/5/2020 at 05:48
Very pretty! Tho it won't get me to switch to UE for my own gamedev. Y'know, just because you use UE5 doesn't mean your game is gonna look like that. It still requires a lot of work and artistry. And I know that with the amount of effort I put into my visuals, my games are gonna end up looking pretty much the same whether in Unity or UE, so I'll be sticking with the one I know.
Thirith on 14/5/2020 at 05:58
When I look at this, as someone who doesn't really know all that much about engines, I wonder to what extent this makes development easier and to what extent it requires production values that are insanely expensive. I can see how more flexible, nuanced lighting systems or self-generating lower detail levels for game geometry etc. could save time, but at the same time I wonder whether such engines wouldn't also require models and environments at massively higher detail levels to begin with for the results to look considerably better than games already do. How much of a new engine is dedicated to intelligent automation (such as the Inverse Kinematics stuff)? And how much is about more polygons, more shaders, more detail and the like?
Sulphur on 14/5/2020 at 06:24
My understanding of the current production pipeline (guys in gamedev, correct me if I'm wrong) is that art assets are initially made at as high a quality as the artists can manage, after which they're baked into low-poly versions for the game, with 'hero' assets (the focus of any given scene) given more of the polygon budget. The Nanite tech seems to promise that you don't need that step any more, as the tech will take care of it for the artists, so they can just go whole hog and not worry about optimising the end result.
Now, to the bigger question of what kind of fidelity artists are expected to crunch out at this point for AAA: given the amount of clutter you can add to a scene with the new generation, there's definitely going to be more asset creation required. But that's really one of the big reasons why asset libraries like Quixel's, with its photogrammetry scans (the entire scene is based on their limestone quarry assets), are going to be more and more important.
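To make the bake step above concrete, here's a toy Python sketch of the idea: the game keeps a cheap low-poly mesh, but its shading detail comes from a texture captured off the high-poly original. Real bakers raycast in 3D between the two meshes; this flattens everything to a 1D heightfield just to show the core loop, and all the names are invented for the example.
[code]
# Toy sketch of the "bake" step: the game keeps a cheap low-poly mesh,
# and shading detail comes from a texture captured off the high-poly
# original. Everything is flattened to a 1D heightfield here; real bakers
# raycast between two 3D meshes.
import numpy as np

def bake_normal_map(high_poly_heights, texture_width):
    """Sample a dense high-poly surface down into a fixed-size normal texture."""
    n = len(high_poly_heights)
    slopes = np.gradient(high_poly_heights)            # slope of the detailed surface
    normals = np.stack([-slopes, np.ones(n)], axis=1)  # perpendicular to the slope
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # The bake: average the dense normals into each texel of the coarse texture.
    texel_of_sample = (np.arange(n) * texture_width) // n
    texture = np.zeros((texture_width, 2))
    for texel in range(texture_width):
        texture[texel] = normals[texel_of_sample == texel].mean(axis=0)
    return texture / np.linalg.norm(texture, axis=1, keepdims=True)

# 10,000 samples of surface detail squeezed into a 256-texel normal map.
detail = np.cumsum(np.random.randn(10_000)) * 0.01
normal_map = bake_normal_map(detail, 256)
print(normal_map.shape)  # (256, 2): cheap to store, cheap to shade with
[/code]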
Renzatic on 14/5/2020 at 06:41
Banging out high res assets is actually fairly quick, and texturing them isn't that overwrought of a process, since at the resolution we're talking about here, you're really more concerned about coating your various objects with the appropriate material, rather than painting in specific details.
Not having to retopologize your highpoly meshes for a low res bake, or bang out various LOD models, would save TONS of time. Those are usually the most time-consuming parts of the entire asset building process, easily taking up 50% of your time at least.
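For a sense of what all that LOD work buys the engine, the runtime side is basically just a lookup table like this Python sketch (mesh names and distance thresholds invented for the example); Nanite's pitch is that the whole table, and the authoring behind it, goes away:
[code]
# Classic hand-authored LOD chain: the engine swaps meshes by distance.
from dataclasses import dataclass

@dataclass
class LODChain:
    # Highest detail first, each with the max camera distance (metres)
    # at which it's allowed to be drawn.
    levels: list[tuple[str, float]]

    def pick(self, distance_to_camera: float) -> str:
        for mesh_name, max_distance in self.levels:
            if distance_to_camera <= max_distance:
                return mesh_name
        return self.levels[-1][0]  # beyond every threshold: cheapest mesh

statue = LODChain([
    ("statue_lod0_500k_tris", 10.0),   # hero detail, close-ups only
    ("statue_lod1_50k_tris", 40.0),
    ("statue_lod2_5k_tris", 150.0),
    ("statue_lod3_500_tris", 1e9),     # distant speck
])

for d in (5, 30, 100, 400):
    print(d, "->", statue.pick(d))
[/code]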
demagogue on 14/5/2020 at 06:42
What would really make this engine shine is if somebody cooked up a system for procedural asset generation and animations that are lifelike.
Alright, so this is my vision of the, or an, ideal future of indie game-making.
For players and AI ... if you've seen Fuse, you just take a stock human model, then you can pop it into a modeling program and tweak it (there's the in-game parameter-based tweaking, like cheekbone, nose shape, etc, but it's easy to do just in Blender from the base), then you click a button and it rigs it and attaches it to a skeleton that already has a stock of like 300 animations that automatically work for that model. Then you make that open source so people continually grow the base of animations, clothes, etc., kind of like Second Life. And then you have Euphoria, which has a bunch of procedural human reflexes and this context-sensitive procedural gesture stuff. It isn't hard to imagine extending it to major categories of animals and creatures: quadrupeds, birds, fish, etc.
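For what it's worth, the parameter-based tweaking part is usually done with morph targets (blend shapes): each slider stores per-vertex offsets from the stock model, and the final mesh is the base plus a weighted sum. A minimal Python sketch with made-up slider names:
[code]
# Morph targets: each slider stores per-vertex offsets from the stock model;
# the final mesh is the base plus a weighted sum. Slider names are made up.
import numpy as np

base_mesh = np.random.rand(5_000, 3)     # stand-in for the stock human model
morph_targets = {
    # per-vertex deltas an artist sculpts once, one set per slider
    "cheekbone_height": np.random.randn(5_000, 3) * 0.01,
    "nose_width":       np.random.randn(5_000, 3) * 0.01,
    "jaw_length":       np.random.randn(5_000, 3) * 0.01,
}

def apply_sliders(base, targets, weights):
    """Blend the stock mesh toward each sculpted extreme by its slider weight."""
    mesh = base.copy()
    for name, w in weights.items():
        mesh += w * targets[name]
    return mesh

tweaked = apply_sliders(base_mesh, morph_targets,
                        {"cheekbone_height": 0.7, "nose_width": -0.3})
# Vertex count and order never change, so the rig and all ~300 stock
# animations still fit the tweaked mesh. That's the trick Fuse relies on.
[/code]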
For static objects, what'd be great is AI-generated objects from photo sources. So you just grab a photo of an object (maybe a set of photos from 6 directions, but a good AI system could predictively fill gaps), click a button, and AI generates a textured object out of it; it could predict the material type, or you just paint it on, etc.
And then for geometry, something like Voxel Farm or other procedural systems, where a system generates real-world geometry based on geology principles (age of the planet, weathering, climate, etc.), and then you can edit it from there, like Minecraft except at the voxel level, which then converts it to pixels.
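As a toy version of the geology idea: start from fractal noise, then run a thermal-erosion pass, where material slumps downhill wherever a slope is over-steep, so older, more weathered worlds come out smoother. This is just an illustrative Python sketch with arbitrary parameters; Voxel Farm's actual methods are far more involved:
[code]
# Toy "geology": fractal noise for raw terrain, then thermal erosion.
import numpy as np

def fractal_terrain(size, octaves=6, seed=0):
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    for o in range(octaves):
        step = size // (2 ** o)            # coarse features first, detail later
        if step < 1:
            break
        coarse = rng.standard_normal((size // step + 1, size // step + 1))
        height += np.kron(coarse, np.ones((step, step)))[:size, :size] / (2 ** o)
    return height

def weather(height, iterations, talus=0.5):
    """Thermal erosion: move material from each cell to its lower neighbours."""
    h = height.copy()
    for _ in range(iterations):
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            diff = h - np.roll(h, shift, axis=(0, 1))
            move = np.clip((diff - talus) * 0.25, 0, None)  # over-steep slopes only
            h -= move
            h += np.roll(move, (-shift[0], -shift[1]), axis=(0, 1))
    return h

young = fractal_terrain(128)
old = weather(young, iterations=50)  # more iterations ~ an older planet
print(young.std(), old.std())        # erosion flattens the extremes
[/code]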
All this kind of tech has already been made; it's just a matter of it all coming together in a unified system. So the idea is: you click a button and it generates a realistic world geometry that you modify from there, you drag in photo sources that generate textured static objects, and you have the human/animal generator that you tweak and add animations to, then you place everything... The point is, if procedural generation of realistic assets catches up with this, it's going to be great for people to make realistic maps and focus just on the placement, art design, etc. And in an engine like this, it'd bring a lot of power to individuals who otherwise wouldn't be able to do anything with it, because of the prohibitive difficulty curve of making realistic assets by hand.
Thirith on 14/5/2020 at 08:07
To what extent might it make things more difficult for games with a non-realist aesthetic if the tools are all geared towards one form of realism or another?
demagogue on 14/5/2020 at 12:27
I saw a tutorial for Fuse that gave me thoughts about that issue. The point of Fuse is that the algorithm procedurally rigs a bipedal model to a skeleton that already has hundreds of animations attached to it. In the tutorial, the guy took the base model of a human and, to make a long story short, used a method that best-fit its vertex arrangement to a troll monster he'd made that was wildly non-realistic and nothing at all like the original model. But the beauty of it was that, because the troll was still vaguely bipedal, the vertex grid still lined up for the rigging algorithm to work, and his massive, cartoonish troll could automatically use the skeleton and all the animations. And then IIRC he could even pop it into Blender's animation editor and tweak all the pre-made animations to wiggle a little more here and there to fit the model's aesthetic.
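In miniature, that best-fit step amounts to something like this Python sketch: once the troll is posed roughly on top of the stock human, each of its vertices borrows the bone weights of the nearest template vertex, and the whole animation library comes along for free. (Real tools also smooth the weights and handle mismatched proportions; all the numbers here are placeholders.)
[code]
# Miniature weight transfer: pose the new mesh roughly over the rigged
# template, then each vertex borrows the bone weights of the nearest
# template vertex.
import numpy as np

template_verts = np.random.rand(2_000, 3)      # rigged stock human
template_weights = np.random.rand(2_000, 60)   # per-vertex weights, 60 bones
template_weights /= template_weights.sum(axis=1, keepdims=True)

troll_verts = np.random.rand(8_000, 3)         # new mesh, vaguely bipedal

def transfer_weights(src_verts, src_weights, dst_verts):
    """Copy each destination vertex's weights from its nearest source vertex."""
    weights = np.empty((len(dst_verts), src_weights.shape[1]))
    for i, v in enumerate(dst_verts):
        nearest = np.argmin(np.linalg.norm(src_verts - v, axis=1))
        weights[i] = src_weights[nearest]
    return weights

troll_weights = transfer_weights(template_verts, template_weights, troll_verts)
# troll_weights now drives the troll with the human skeleton, so every
# pre-made animation plays on it unmodified.
[/code]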
The basic punchline is: if you have a really good general procedural system, it's actually going to make it easier (or they should make it able to) to go for a non-realistic aesthetic, because you get all the groundwork taken care of for you, and all you have to do is add the bits that make it distinctive, vastly cutting down the overhead of making such an asset. I understand the risk of what you're saying, and my response is that the tools should be made in a general format that works with even really alternative visions. I didn't really emphasize it earlier, but I was thinking from the start about categories like "vaguely bipedal", "vaguely quadrupedal", "vaguely fish-like", etc.
Edit: Sorry, I just realized you're talking about UE5's tools, not what I said. But in that case ... the thing is, a billion pixels is a billion pixels. I have an idea people are going to come up with some really otherworldly stuff if they're really unbounded; anything that can be modeled can be put on screen.
I'm much more worried about the organizational culture of the studios that have the resources to take advantage of this tech (i.e., financially risk-averse, hypersensitive to acceptable ROI, sticking to focus-group-approved "safe" aesthetics) than about the tech itself gearing art direction one way or another. I think any good artist can develop their own style with any decent tech if left to their own devices, but financial and organizational pressures are the real limiting factor IMO.