Springheel on 29/8/2007 at 01:25
We're having a discussion over at TDM right now about whether to implement location-specific barks (where AI say things like, "I wonder if he's behind that crate", to create the illusion that they understand their environment). At the moment, we're not quite sure how to do it without taking a significant performance hit that might outweigh the benefits.
I'm just wondering if anyone knows how TDS did this? Maybe they had a method our programmers hadn't considered. Is this something that works without any input from the mapper, or do you have to set specific properties on objects?
Judith on 29/8/2007 at 07:44
AFAIR, the AI doesn't recognize specific objects in the game. There are some barks like "Let's see what's in this shadow", but I'm not sure whether they can tell lit areas from dark ones. However, there is something like "recognition classes" for items (you can assign them manually; for some objects they're set by default): out of place, stolen, moving, etc. It's a set of generic barks performed when a torch is doused, something has moved, G. stole an object, etc. But that's all.
Jeshibu on 29/8/2007 at 14:51
I assume this method was already proposed, but could you explain what exactly would cause the hit in performance?
A guard is suspicious of an intruder and decides to search a certain area. From this "search goal" position, do a horizontal radius check (a plane; you wouldn't want to catch objects above that position) for the nearest "mentionable object", which would be a special property that's either left null if not applicable, or set to "crate" if it's a crate, and so forth. If you already have something that defines whether an object is a crate you could use that too, but then you'd have to combine the distance check with a more elaborate object check to see whether it's a relevant object ("wonder if he's near that flask?" would be a bit silly).
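Something like this rough C++ sketch, with completely made-up names (I have no idea what TDM's actual classes look like):

#include <vector>

// Hypothetical tagged entity: barkTag would be "crate", "desk", etc.,
// or null if the object isn't worth mentioning.
struct Entity {
    float x, y, z;
    const char* barkTag;
};

// Find the nearest mentionable object to the search-goal position,
// ignoring anything more than maxHeight above it (the "plane" idea),
// and comparing squared horizontal distances so no sqrt is needed.
const Entity* FindMentionable(const std::vector<Entity>& ents,
                              float gx, float gy, float gz,
                              float radius, float maxHeight)
{
    const Entity* best = nullptr;
    float bestSq = radius * radius;
    for (const Entity& e : ents) {
        if (!e.barkTag) continue;           // not a mentionable object
        if (e.z - gz > maxHeight) continue; // too far above the goal
        float dx = e.x - gx, dy = e.y - gy;
        float dSq = dx * dx + dy * dy;      // horizontal distance only
        if (dSq < bestSq) { bestSq = dSq; best = &e; }
    }
    return best; // null => fall back to a generic "where is he?" bark
}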
Of course, I don't know exactly what sort of system TDM uses or how hard this would be to code (or whether there even are move goals for searching that you could run the object check from).
Ziemanskye on 29/8/2007 at 17:17
You know, brute force worked for like 90% of Deus Ex.
foreach RadiusActors(class'Actor', A, Radius)
type things. :p
However, I'm with Judith on this: it doesn't know you're behind things (as far as I know/remember); it's just generic stuff. "I wonder if he's hiding behind that...?" while looking in any given direction, rather than actually working out what the object is. That at least saves having to record extra sounds (and find a voice actor willing to work on demand) for every conceivable object.
I'm sure there'd be some fairly cheap way of doing it, like working backwards: have the NPCs mention an object flag for whatever's closest to the character, rather than having the NPC actually look at anything. When the PC passes a flagged object close enough to affect the light gem, remember what it was, so any guards comment on that object if that situation/bark comes up.
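Something like this, say (names completely made up):

#include <string>

// Hypothetical: updated by whatever check already runs when the player
// gets close to a flagged object (e.g. the same pass that feeds the
// light gem). The guards themselves never search for objects.
static std::string g_lastNearbyTag; // e.g. "crate"; empty if nothing recent

void OnPlayerNearFlaggedObject(const std::string& tag)
{
    g_lastNearbyTag = tag; // remember it for later barks
}

std::string PickSearchBark()
{
    if (!g_lastNearbyTag.empty())
        return "I wonder if he's behind that " + g_lastNearbyTag + "...";
    return "Where did he go?"; // generic fallback
}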
Or something: people who're better programmers than me have a habit of being creative at smoke-and-mirror effects.
Bardic on 29/8/2007 at 18:32
It might just be best for mission creators to script it themselves. They could put volumes near big crates or other objects with scripts like:
when entered by NPC:
    if the NPC is alerted:
        random 10% chance to say "blah blah",
        then a command to go search there.
Authors could copy the volume for any locations they want it at, and the TDM team would only need to create audio files for the different large objects.
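In rough C++-style pseudocode (I don't know TDM's actual scripting language, so all the names here are invented):

#include <cstdlib>

// Hypothetical trigger volume the mapper copies around big crates etc.
struct BarkVolume {
    const char* barkSound; // e.g. "guard_check_crate"
    float chance;          // e.g. 0.10f for the 10% roll
};

// Imagined engine hook, called when an NPC enters the volume.
void OnVolumeEntered(const BarkVolume& vol, bool npcAlerted,
                     void (*playBark)(const char*), void (*searchHere)())
{
    if (!npcAlerted)
        return;
    if ((float)std::rand() / (float)RAND_MAX < vol.chance) {
        playBark(vol.barkSound); // "blah blah"
        searchHere();            // then go search there
    }
}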
Or, you could create a system where certain large objects automatically get a volume assigned around them. The computer would only have to calculate things when an NPC was near that object.
Springheel on 29/8/2007 at 20:04
I'd best just quote one of our programmers:
Quote:
I can't think of any elegant way to implement them, other than brute force: setting variables on objects to tell the AI what type of bark they should use, then doing some kind of bounds check around the alert location or the point they've decided to search, and finding the closest object with that variable set.
A check of this sort would have to flood portal areas, because if it went through a wall and found something on the other side, the location bark wouldn't make any sense. It seems like it would also have to check all entities. If they say, for example, "over there, by that desk!", we don't know what form the FM author will choose for the desk: it could be a func_static, it could be a moveable desk you can push around, it could even be a door if it slides aside to reveal a secret opening underneath. In all these cases, the FM author also has to remember to set the variable telling the AI that it's a desk.
[EDIT: Maybe it would work to integrate this info into the AAS grid data, so AAS points near a desk are labeled as such. Then the AI can flood out along the AAS grid until they find a label for what's near that point. This still has many of the same drawbacks though, since the FM author still has to label things somehow.]
There's also the wrench that the AI must then actually GO to the place they've just talked about... it wouldn't make sense for them to say, "I wonder if he's behind that desk," and then walk off in the other direction. So there would have to be some kind of tie-in to where the AI actually intends to search.
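To make the brute-force idea he describes concrete, here's roughly what I picture (all names invented; AreaVisibleFrom is just a stand-in for the portal flood):

#include <vector>

struct SpottedObject {
    float x, y, z;
    const char* barkType; // "desk", "crate", ... set by the FM author
    int areaNum;          // which portal area the object sits in
};

// Stand-in for the portal flood: the real thing would flood outward
// through open portals from fromArea and stop at walls/closed doors.
static bool AreaVisibleFrom(int fromArea, int toArea)
{
    return fromArea == toArea; // stub: same room only
}

const SpottedObject* ClosestBarkObject(const std::vector<SpottedObject>& objs,
                                       float sx, float sy, float sz,
                                       int searchArea, float radius)
{
    const SpottedObject* best = nullptr;
    float bestSq = radius * radius;
    for (const SpottedObject& o : objs) {
        if (!o.barkType) continue;                             // never tagged
        if (!AreaVisibleFrom(searchArea, o.areaNum)) continue; // behind a wall
        float dx = o.x - sx, dy = o.y - sy, dz = o.z - sz;
        float dSq = dx * dx + dy * dy + dz * dz;
        if (dSq < bestSq) { bestSq = dSq; best = &o; }
    }
    return best;
}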
I thought TDS actually had this. Is it just my imagination? If so, that removes one of the arguments for trying to include it in the first place.
Krypt on 29/8/2007 at 20:05
For TDS we came up with a big list of objects and locations to be referenced in context-specific barks. These were usually very generalized: desk, crate, stairs, street, etc. Every applicable object had a bark tag property on it that told the game which barks apply to that object. For location-specific barks, the designers would place volumes with a bark tag over the applicable areas in every level. Whenever an AI was about to do a "He's over there!" type of bark, the game would check whether the player was within a certain distance of a tagged object or inside a tagged volume, then play the appropriate bark.
It wasn't very processor intensive, it was just some extra work on the designer's part.
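In very rough C++-style pseudocode, just to show the shape of the check (this isn't our actual code, and the names are made up):

#include <vector>

struct TaggedObject { float x, y, z; const char* barkTag; };
struct TaggedVolume { float lo[3], hi[3]; const char* barkTag; };

// Returns the tag to use for a "He's over there!" bark, or null for
// the generic version.
const char* PickBarkTag(const std::vector<TaggedObject>& objs,
                        const std::vector<TaggedVolume>& vols,
                        float px, float py, float pz, float maxDist)
{
    // 1. Is the player within range of a tagged object ("crate", "desk")?
    for (const TaggedObject& o : objs) {
        float dx = o.x - px, dy = o.y - py, dz = o.z - pz;
        if (dx * dx + dy * dy + dz * dz <= maxDist * maxDist)
            return o.barkTag;
    }
    // 2. Is the player inside a tagged volume ("street", "stairs")?
    float p[3] = { px, py, pz };
    for (const TaggedVolume& v : vols) {
        bool inside = true;
        for (int i = 0; i < 3; ++i)
            if (p[i] < v.lo[i] || p[i] > v.hi[i]) inside = false;
        if (inside)
            return v.barkTag;
    }
    return nullptr;
}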
Jeshibu on 29/8/2007 at 20:36
Well, now we have a pretty definitive answer on how Thief 3 did it. Thanks Krypt. :)
That sounds like a good enough solution, kind of like what I proposed, except I'd make it check its distance from the place the AI intends to move to instead of the player's location. The question is, do the AIs in TDM have an intended place they want to move to that you can reference, or just a general direction to wander in?
Krypt on 29/8/2007 at 23:51
Quote Posted by Jeshibu
Well, now we have a pretty definitive answer on how Thief 3 did it. Thanks Krypt. :)
That sounds like a good enough solution, kind of like what I proposed, except I'd make it check its distance from the place the AI intends to move to instead of the player's location. The question is, do the AIs in TDM have an intended place they want to move to that you can reference, or just a general direction to wander in?
The AI code would place temporary, invisible "Evidence" markers in the world that would give the AI a location to investigate. For example, if the player threw a cup across the room and it landed by a desk, it would mark the spot where it hit with a sound-evidence marker. If the AI hadn't detected the player himself, the bark code would check the locations and types of Evidence markers to decide which context-sensitive bark to play. In this case it would be something like "Hey, I heard something by that desk!"
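Again in rough pseudocode (invented names, not the real thing):

#include <string>

enum class EvidenceType { Sound, Sight };

struct EvidenceMarker {
    EvidenceType type;
    float x, y, z;          // e.g. where the thrown cup landed
    std::string nearbyTag;  // from the tagged-object check; may be empty
};

std::string BarkForEvidence(const EvidenceMarker& m)
{
    if (m.type == EvidenceType::Sound) {
        if (!m.nearbyTag.empty())
            return "Hey, I heard something by that " + m.nearbyTag + "!";
        return "Hey, I heard something over there!";
    }
    return "Who's there?"; // other evidence types elided
}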
sparhawk on 31/8/2007 at 06:20
Quote Posted by Jeshibu
I assume this method was already proposed, but could you explain what exactly would cause the hit in performance?
The performance hit could come when many objects have ID tags, because you have to do distance tests to find out which object is closest to the point of suspicion, and distance tests are often quite expensive. Another issue is that the distance test alone wouldn't suffice: if the nearest object is behind a thin wall, you still wouldn't see it, so this would have to be taken into account as well. And then there is the issue of glass walls. If there's a glass wall in between, it's solid, but you would still see the object; yet the AI shouldn't use it, because it should recognize that the sound must have come from in front of the glass (or behind it, depending on the circumstances). And what happens if the visibility check determines that there is something between the object and the source, but it's only a small cup that gets in the way? I'm sure there are many more cases where a human has an easy time telling, but which are hard to code.
So there are quite a lot of things to consider, all of which take time. And with many objects this adds up.
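One way to keep the cost from adding up too badly, sketched in C++ with made-up names: compare squared distances so you avoid the sqrt, sort the candidates by distance, and only pay for the expensive visibility trace until the first candidate passes.

#include <algorithm>
#include <vector>

struct Obj { float x, y, z; const char* tag; };

// Stand-in for the expensive part: a trace through world geometry that
// would also have to handle glass, small occluders, and so on.
static bool TraceLineOfSight(float, float, float, float, float, float)
{
    return true; // stub
}

const Obj* NearestVisible(std::vector<Obj> objs, float sx, float sy, float sz)
{
    auto distSq = [&](const Obj& o) {
        float dx = o.x - sx, dy = o.y - sy, dz = o.z - sz;
        return dx * dx + dy * dy + dz * dz;
    };
    // Cheap part first: sort by squared distance (no sqrt needed)...
    std::sort(objs.begin(), objs.end(),
              [&](const Obj& a, const Obj& b) { return distSq(a) < distSq(b); });
    // ...then pay for traces only until the first candidate passes.
    for (const Obj& o : objs)
        if (TraceLineOfSight(sx, sy, sz, o.x, o.y, o.z))
            return &o;
    return nullptr;
}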