Pyrian on 15/3/2018 at 13:35
I recommend a needs hierarchy instead of an FSM. It really is the same thing only better. A needs hierarchy is better organized, easier to think about, less prone to weird bugs, and generally just better in every conceivable way. Take that top image, for instance:
Priority 1: Health points are low? Find aid.
Priority 2: Player is attacking? Evade.
Priority 3: Player in sight? Attack.
Priority 4: Wander.
Note that in this system, transitions from any state to any state are easily covered without turning the diagram into spaghetti.
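Sketched in code (purely illustrative; the class fields, action names, and the 25%-health threshold are all made up, not from any real engine), the hierarchy is just an ordered list of checks where the first match wins:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    hp: int
    max_hp: int
    sees_player: bool

@dataclass
class Player:
    attacking: bool

def choose_action(unit: Unit, player: Player) -> str:
    """Walk the hierarchy top-down; the first need that applies wins."""
    if unit.hp < unit.max_hp * 0.25:    # Priority 1: low health
        return "find_aid"
    if player.attacking:                # Priority 2: under fire
        return "evade"
    if unit.sees_player:                # Priority 3: target in sight
        return "attack"
    return "wander"                     # Priority 4: default
```

Every "transition" from any state to any other is implicit: you just re-run the checks each tick and act on whatever comes out on top.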
Thirith on 15/3/2018 at 14:08
Note that I know nothing about AI programming, so the following may be painfully naive, but looking at that I can't help but want something a bit more complex and interesting and less black and white, e.g.
Priority 1: Health points are low, but there's no obvious aid (health packs, friendly units) in sight, so depending on a number of variables (e.g. skill, equipment, courage, intelligence, aggression) the unit might go for a suicide attack, because finding aid is unlikely to succeed, so at least they can go out doing some damage to the player.
Priority 2: Should evade and attack be different actions by definition? Should an enemy unit be able to shoot from the hip while they're evading, perhaps suppressing the player in the process? Also, enemies might have more personality if some of them are cautious while others are cocky.
Priority 3: I'd want the enemy to have some situational awareness. A dude with a pistol that spots a player wielding a BFG would be suicidal to attack directly. There may be cover nearby from which they can attack just as well, but with less risk to themselves. Or there might be friendly units about that they could team up with.
How would either an FSM or a needs hierarchy do this kind of thing?
Nameless Voice on 15/3/2018 at 19:05
What you're talking about there is a priority-based system with more complex priorities.
That's not a state machine, it's a dynamic hierarchy.
The AI would have a set of potential goals, and it would score each of those goals based on various criteria, stack them up, and perform the goal with the highest score.
An example goal score might be for finding health, which starts off at zero, goes up depending on how injured the character is, but also goes down if there's no health nearby, or depending on what they think their chances of beating the player are, based on their weapons and the player's health. All of that could be worked into a single mathematical formula which would give a concrete score value.
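To make that concrete, here's a rough sketch of the find-health score as a single formula. The weights, field names, and the rival "attack" score are invented for illustration; the point is only that each goal boils down to one number and you take the max:

```python
from dataclasses import dataclass

@dataclass
class Context:
    injury: float          # 0.0 = healthy .. 1.0 = nearly dead
    health_nearby: bool
    win_chance: float      # estimated odds of beating the player

def score_find_health(ctx: Context) -> float:
    score = ctx.injury * 100.0      # more injured -> more urgent
    if not ctx.health_nearby:
        score -= 40.0               # no aid in sight -> less attractive
    score -= ctx.win_chance * 30.0  # confident fighters stay in the fight
    return score

def best_goal(ctx, goals):
    """Score every (name, scorer) pair and pick the highest."""
    return max(goals, key=lambda g: g[1](ctx))[0]
```

A badly injured AI with health nearby picks "find_health"; a barely scratched one that fancies its chances keeps attacking, all from the same formula.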
As a random aside, this "goals with priorities" system is how Isaac Asimov wrote the Three Laws of Robotics. They were not actually written in a strict hierarchy, but instead the lower laws could overrule higher ones depending on the priority value that the situation gave each law.
The simple example was that a robot wouldn't put itself in extreme danger just because it was casually told to, even though doing what it was told was Second Law and self-preservation only Third Law, because it deemed its own value and worth to be higher priority than a casual command. The laws are in a rough order, but the exact priority is based on a calculation of their current value.
Pyrian on 15/3/2018 at 20:16
Thanks for the link, icemann, that's great.
Quote Posted by icemann
So it looks like FSM's was what they went with.
So it looks like icemann didn't read the paper he linked. Pity, 'cause it's really pretty cool.
FSM's are too generic, and the number of transitions can easily spiral out of control in practice. Read the linked document's takedown of FSM's on page 3; they used one before, and it, um, failed to help solve complex problems. As I demonstrated above, an FSM is significantly improved by putting it in a hierarchy; it's easier to build and understand. (Technically the basic hierarchy IS an FSM, but... a transistor is an FSM. That's the "generic" problem.) The so-called "FSM" that FEAR uses sits entirely below the decision-making process and just tells the animation system whether it's a moving animation or not. (IMO that's not an FSM at all, not really.)
Anyway, yeah, at the top level the FEAR AI is a hierarchy:
Quote:
We need to assign a Goal Set to each A.I. in WorldEdit. These goals compete for activation, and the A.I. uses the planner to try to satisfy the highest priority goal.
The "planner" is interesting, though. It's a glorified pathfinder. Literally A*. They have a bunch of actions which, like goals, can be assigned to individuals (as something that unit can do). The planner picks actions that form a "path" to completing the goal. The example given in the document is a soldier that can either reload then shoot, or move forward, jump over a desk, and make a melee attack. The planner can decide which one to do based on which takes longer! (Although they can set whatever weight they want for various actions, it's not literally just time taken.)
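The reload-then-shoot vs. jump-the-desk example can be sketched as a search problem. Heavy caveats: FEAR's actual planner is A* over symbolic actions with a heuristic; this toy uses plain uniform-cost search, and every action name, precondition, and cost here is invented for illustration:

```python
import heapq

# Each action: (preconditions, effects, cost). All invented for illustration.
ACTIONS = {
    "reload":    (dict(has_ammo=False),                 dict(has_ammo=True),    2.0),
    "shoot":     (dict(has_ammo=True, sees_target=True), dict(target_dead=True), 1.0),
    "jump_desk": (dict(near_target=False),              dict(near_target=True), 3.0),
    "melee":     (dict(near_target=True),               dict(target_dead=True), 1.0),
}

def plan(state, goal):
    """Uniform-cost search over world states; returns the cheapest action list."""
    frontier = [(0.0, 0, [], dict(state))]  # (cost, tiebreak, path, state)
    seen = set()
    counter = 1
    while frontier:
        cost, _, path, s = heapq.heappop(frontier)
        if all(s.get(k) == v for k, v in goal.items()):
            return path
        key = frozenset(s.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff, c) in ACTIONS.items():
            if all(s.get(k) == v for k, v in pre.items()):
                ns = dict(s)
                ns.update(eff)
                heapq.heappush(frontier, (cost + c, counter, path + [name], ns))
                counter += 1
    return None
```

With these costs, reload-then-shoot (3.0) beats jump-then-melee (4.0), so the planner "paths" through the cheaper sequence, exactly the kind of trade-off the paper describes.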
Interesting stuff. Glade Raid uses more of a straight hierarchy system, with a few subhierarchies ("what KIND of attack should I make?"). The pathfinder just feeds data into it, providing a list of reachable-this-turn hexes in time-to-reach order (making it very easy to find the closest hex that has line of sight, for example - I just iterate through the list until I find one).
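That "iterate the sorted list until something matches" trick is tiny in code (hex coordinates and the `has_los` predicate here are placeholders, not Glade Raid's actual data structures):

```python
def closest_hex_with_los(reachable_hexes, has_los):
    """reachable_hexes is pre-sorted by time-to-reach, so the first
    hex that passes the line-of-sight test is also the closest one."""
    for hex_ in reachable_hexes:
        if has_los(hex_):
            return hex_
    return None  # nothing reachable this turn has line of sight
```

Because the pathfinder already paid the sorting cost, every "closest hex such that X" query becomes a single linear scan.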
Anyway, I don't think I'm ever again going to swallow the "smoke and mirrors" criticism of FEAR's AI. Yeah, they don't "really" do everything it looks like they do, but what they DO do is pretty cool.
Starker on 15/3/2018 at 22:08
Quote Posted by Pyrian
Anyway, I don't think I'm ever again going to swallow the "smoke and mirrors" criticism of FEAR's AI. Yeah, they don't "really" do everything it looks like they do, but what they DO do is pretty cool.
I'd say that smoke and mirrors are just as important to game design as the real thing, if not more. Some of the ways FEAR made the AI look more intelligent were pretty ingenious, actually -- an NPC would first decide on an action, then another NPC would give him the order to do the action, creating the illusion that they were following commands.
And there's a ton more stuff like this that devs do in order to create a better (or the intended) experience: https://twitter.com/Gaohmee/status/903510060197744640
catbarf on 15/3/2018 at 23:23
Quote Posted by Pyrian
I recommend a needs hierarchy instead of an FSM. It really is the same thing only better. A needs hierarchy is better organized, easier to think about, less prone to weird bugs, and generally just better in every conceivable way. Take that top image, for instance:
Priority 1: Health points are low? Find aid.
Priority 2: Player is attacking? Evade.
Priority 3: Player in sight? Attack.
Priority 4: Wander.
Note that in this system, transitions from any state to any state are easily covered without turning the diagram into spaghetti.
When I was in college I did a lot of research on FEAR's AI for a class on developing AI in gaming. I wholeheartedly recommend reading the paper icemann linked to, it's got a lot of good information about how game AI works. I would disagree with the assessment that a needs hierarchy is easier to think about and less prone to weird bugs, and in fact, I'd argue the opposite.
I remember my team designed an AI that prioritized getting close enough to the player to fight, followed by getting into cover as a secondary. What happened in practice was that the enemy would be triggered from a long distance, it would start to run close, and then as soon as it got close enough, the desire to get into cover would take priority and it would double back towards cover behind it. At which point it would be far enough away that the desire to get closer would take priority and so on and so on. It would just stand there and vibrate, unable to decide between two courses of action as they quickly changed priorities.
It wasn't too hard to fix, but FSMs don't have to worry about that sort of thing at all, because you define the conditions that drive an AI into a different state. If we implemented our AI as an FSM, we could rigidly define the transition conditions between each activity such that the back-and-forth behavior wouldn't occur. If a similar sort of behavior did crop up from an unwanted transition condition, then it's easy enough to adjust, whereas with a needs-based system messing with the weights and conditions to solve one problem can easily have ripple effects on the rest of the system.
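For what it's worth, the generic fix for that vibration is hysteresis: a challenger goal has to beat the current one by some margin before the AI switches. A minimal sketch (goal names and the margin are illustrative, not what my team actually shipped):

```python
def pick_goal(scores, current, margin=10.0):
    """Switch goals only when a challenger beats the current goal by `margin`."""
    best = max(scores, key=scores.get)
    if current is not None and best != current:
        if scores[best] < scores.get(current, float("-inf")) + margin:
            return current  # not decisively better; stick with what we're doing
    return best
```

An AI closing in at score 55 won't flip to cover at 60 mid-stride, but a decisive 70 still pulls it off its current goal, which kills the stand-and-vibrate loop without touching the individual scorers.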
It's fitting that Nameless Voice mentioned Asimov's Laws of Robotics- the entire point of I, Robot is to illustrate how three non-rigid priorities are inadequate to govern artificial intelligence, and could conflict and interact in unexpected and undesired ways. Every short story is about an unexpected emergent behavior arising from the application of a needs-based hierarchy. Despite its wholly fictional context, it's actually a really good illustration of why developing AI is so hard.
As for 'AI that's too good wouldn't be fun'- I'm not so sure about that. As FEAR shows, smart AI that doesn't 'cheat' (respond to information it cannot realistically have) is a more interesting source of challenge than just giving dumb AI extra health. I could definitely see a shooter, particularly one geared towards realism, using the intelligence of its AI as a selling point. It wouldn't even necessarily have to be particularly difficult, as you can always make other aspects of the game easier to compensate (e.g. don't give them aimbot accuracy). Personally I would much rather get frustrated at an enemy being believably intelligent than at one sponging up my bullets and beating me through attrition.
On that note, I know some of you folks have played Alien: Isolation, did you notice the Alien's ability to learn? I thought it was a nice touch that the more you use fire against the Alien, the more wary it becomes of it, but the more aggressive it gets once you put the weapon down or turn away. It's got to be a pretty simple mechanism under the hood, but it really adds to the immersion.
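Pure speculation on my part, but that "learning" could be as simple as a wariness counter per threat that rises on exposure and decays over time (nothing here is Creative Assembly's actual code; the names and numbers are made up):

```python
class ThreatMemory:
    """Toy model of a creature growing wary of a weapon it keeps seeing."""

    def __init__(self, gain=1.0, decay=0.1):
        self.wariness = 0.0
        self.gain = gain      # how much each exposure teaches it
        self.decay = decay    # how fast the lesson fades per tick

    def saw_flamethrower(self):
        self.wariness += self.gain  # each exposure makes it warier

    def tick(self):
        # Wariness fades once the weapon is put down or out of sight.
        self.wariness = max(0.0, self.wariness - self.decay)

    def keeps_distance(self, threshold=2.0):
        return self.wariness >= threshold
```

A couple of flamethrower bursts pushes it over the threshold and it hangs back; leave the weapon holstered long enough and the counter decays, and it gets aggressive again, which matches the behavior you see in-game.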
Pyrian on 16/3/2018 at 01:34
Quote Posted by catbarf
I wholeheartedly recommend reading the paper icemann linked to, it's got a lot of good information about how game AI works.
I did. It's pretty cool.
Quote Posted by catbarf
I would disagree with the assessment that a needs hierarchy is easier to think about and less prone to weird bugs, and in fact, I'd argue the opposite.
Despite having in the same paragraph recommended a paper which uses a needs hierarchy as its top level of AI?
Quote Posted by catbarf
It wasn't too hard to fix, but FSMs don't have to worry about that sort of thing at all...
Nonsense. I've literally worked on FSM's doing the exact same back-and-forth thing. (A hierarchy is little more than a type of FSM anyway.) And frankly, it was a lot harder to fix in the FSM, precisely because the focus on the transition rather than the goal meant that fixes tended to fail whenever things got complex (you fix one transition, but other transitions and other combinations of transitions end up doing the same thing), while the hierarchy simply involves less complexity up front, especially as it scales. Fixes on a hierarchy are one and done; fixes on non-hierarchical FSM's end up being the ones that are all over the place and have unforeseen ripple effects.
Here's the bottom line: lacking the need to define multiple transitions per state (transitions that have an obnoxious tendency to grow exponentially), hierarchies fundamentally accomplish the same thing with less logic. It's a more elegant solution that will always be easier to build and maintain, precisely because there's less of it to build and maintain.
Quote Posted by catbarf
...we could rigidly define the transition conditions between each activity...
How can you not immediately see the problem with this? Exponential growth of special case handling. NOT a good plan. Not a good architecture.
I mean, think about the case you yourself cited. In the hierarchy, you fix the cover-seeking behavior to seek cover in range. One and done. And hey, it doesn't just fix the case where the closing-in transition needed that logic, it also fixes any case where the guy WASN'T closing in and needed that logic. If you have an FSM and just fix the closing-in transition, you still potentially have a guy who's already in range moving out of range to get cover, because that isn't covered by that particular transition. You've just built yourself a maintenance nightmare, precisely because the FSM allowed you to fix a symptom instead of fixing the problem.
EDIT: tl;dr: Just read the cool document icemann linked to for yourself (http://alumni.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf). It's saying much the same thing I am about FSM's: "...it is the complexity of the combination and interaction of all of the behaviors that becomes unmanageable", along with a bunch of other interesting stuff.
Slasher on 16/3/2018 at 02:26
I was watching a video of still-unreleased Far Cry 5 and the target tracking wall hack thing looks so tacky. Whatever happened to just sneaking around not knowing exactly where your antagonists were?
You'd think they could put at least a modicum of effort into explaining it. Assassin Vision. Outsider-gifted powers. UAVs. Spy satellites. The Force. Unicorn farts. I just came up with a bunch of ideas better than FC5's and I'm not even paid to. No One Lives Forever 2 gave you tracking darts you could shoot at enemies.
I think in previous FC games, tracking can be disabled. But I'm pretty sure it's on by default, so I'm going to consider it a "feature."
Bucky Seifert on 16/3/2018 at 05:53
Long, drawn out, non-optional, un-skippable tutorial sections for games that are really not that complicated. Looking at you, Pokemon Moon. I've been playing these games for 2 decades, I know how to play!
Also not a fan of overly long dialog. I don't mind reading, but there are several CRPGs that just feel like they were written by someone who thought the longer and more drawn out the dialog is, the more intelligent it will seem.
And, finally, real time in a 4X game. I'm sorry, but it's what prevented me from getting into Stellaris. Yes, I know you can pause, but even so, I can't get into a 4X game unless it's turn-based and I can really think over my decisions.