Engineer

It's trivial to determine actions via a fast formula (including bitwise ops) or a LUT (a lookup table, which could be a dense array, sparse array, or hashmap), given the available info coming in via the AI's senses. Combine that with an FSM and you'll have a simpler, quicker, more debuggable system.
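To make that concrete, here is a minimal sketch of the bitmask-plus-LUT idea. All the sense flags, action names, and the "stunned" FSM state are illustrative assumptions, not anything from a real engine:

```python
# Hypothetical sense flags packed into a bitmask (names are illustrative).
ENEMY_NEAR = 1 << 0
LOW_HEALTH = 1 << 1
HAS_AMMO   = 1 << 2

# Dense LUT: one action per possible sense combination (2**3 entries here).
# In practice you'd fill this table offline or at startup.
ACTION_LUT = ["patrol"] * 8
ACTION_LUT[ENEMY_NEAR]                           = "flee"    # enemy near, no ammo
ACTION_LUT[ENEMY_NEAR | LOW_HEALTH]              = "flee"
ACTION_LUT[ENEMY_NEAR | HAS_AMMO]                = "attack"
ACTION_LUT[ENEMY_NEAR | HAS_AMMO | LOW_HEALTH]   = "kite"

def choose_action(senses: int, fsm_state: str) -> str:
    """One table read per decision; the FSM state can veto or remap it."""
    if fsm_state == "stunned":   # FSM overrides the table when needed
        return "recover"
    return ACTION_LUT[senses]
```

A decision is then a single indexed read, e.g. `choose_action(ENEMY_NEAR | HAS_AMMO, "normal")` yields `"attack"`; the whole decision surface is visible in one table, which is what makes it easy to debug.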

I don't think you really need inference here. You could instead pass the relevant arguments to the formula and leave unknown ones at defaults (e.g. zero), so that they don't contribute (much) to the actions the AI ends up taking.
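A sketch of that defaults-instead-of-inference idea, with made-up sense names and weights; the point is only that an unobserved sense defaults to zero and drops out of the sum:

```python
# Weighted scoring formula; weights and sense names are illustrative.
WEIGHTS = {"enemy_distance": -0.5, "own_health": 0.3, "ammo": 0.2}

def aggression_score(senses: dict) -> float:
    # senses.get(name, 0.0): a sense the AI hasn't observed contributes
    # nothing to the result -- no inference step needed to fill it in.
    return sum(w * senses.get(name, 0.0) for name, w in WEIGHTS.items())

# 'ammo' unobserved here, so it is simply treated as 0.
score = aggression_score({"enemy_distance": 2.0, "own_health": 1.0})
```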

I suppose one benefit you would get from Rete is the ability to "remember" without a full recalc each time... but I doubt that matters given your relatively tiny datasets. Bear in mind that any graph structure like Rete's tends to be much slower than dense or even sparse arrays of equal node count: the CPU cache benefits greatly from linear allocation and read-ahead, as opposed to the random-access patterns that dynamically allocated graph nodes demand.
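To illustrate why the "remember partial matches" feature buys little at this scale: with a tiny fact set, a full recalc per frame is just one linear pass over a dense rule list. The rules and facts below are illustrative assumptions:

```python
# Full per-frame re-evaluation over a small, dense rule list.
# Each rule is (condition, action); rules and facts are illustrative.
RULES = [
    (lambda f: f["enemy_near"] and f["ammo"] > 0,  "attack"),
    (lambda f: f["enemy_near"] and f["ammo"] == 0, "flee"),
    (lambda f: not f["enemy_near"],                "patrol"),
]

def evaluate(facts: dict) -> list:
    """Recompute everything each frame: fire every rule whose condition holds.
    No incremental matching state to build, invalidate, or debug."""
    return [action for cond, action in RULES if cond(facts)]
```

For a handful of rules and facts this loop is effectively free each frame, and it stores its rules contiguously rather than as pointer-linked nodes.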

A couple of questions worth asking before reaching for Rete:

  • Why use a generalised string-pattern-matching algorithm for AI data that will already be in memory in a more efficient format than strings/symbols?
  • Since global game data is known per frame (else you would not have a simulation to speak of!), and is quite small (?), why bother with inference methods at all? Are you trying to emulate a more realistic perspective-based AI that can only operate off what its senses tell it? There are probably easier and more efficient ways of doing that than involving Rete... I could be wrong.

