Currently we set up Task Forces with Goals: either static goals, or multiple goals that the AI can be routed between depending on the situation. The question is: should we, or could we, create a layer of intelligence that looks at the current goals and the currently available forces, and tasks them intelligently outside of the hard-coded system?
To work, all the system needs is Objectives and units to task. We would potentially give up the ability to tell units where to go, and instead let the system make up its own mind about how to approach the problem.
Typical processes:
The system knows there are 15 objectives: 3 are Primary objectives and 12 are secondary or intermediate. It has three groups available, with more on the way. It assigns one group to each objective (choosing arbitrarily among equal options), using the distance to the objective and routing through the Intermediate objectives. If a group succeeds in occupation, the system then begins to populate the intermediate objectives. If it loses groups or units, it may choose to reinforce intelligently according to a ruleset. There may be cases where withdrawing is a viable action.
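The assignment step above could be sketched roughly as follows. This is a hypothetical illustration only: the `assignGroups` function, the objective/group table layouts, and the "on the way" routing test are all assumptions, not an existing API in the project.

```lua
-- Hypothetical sketch of distance-based tasking: assign each group to the
-- nearest unclaimed primary objective, routing through intermediates.

local function dist(a, b)
  local dx, dy = a.x - b.x, a.y - b.y
  return math.sqrt(dx * dx + dy * dy)
end

local function assignGroups(groups, objectives)
  -- Split objectives into primaries and intermediates.
  local primaries, intermediates = {}, {}
  for _, obj in ipairs(objectives) do
    if obj.priority == "primary" then
      table.insert(primaries, obj)
    else
      table.insert(intermediates, obj)
    end
  end

  local tasks = {}
  for _, group in ipairs(groups) do
    -- Pick the nearest primary objective not yet claimed by another group.
    local best, bestDist
    for _, obj in ipairs(primaries) do
      local d = dist(group.pos, obj.pos)
      if not obj.claimed and (not bestDist or d < bestDist) then
        best, bestDist = obj, d
      end
    end
    if best then
      best.claimed = true
      -- Route via any intermediate objective closer to the group than the
      -- final objective is (a crude "on the way" test, for illustration).
      local route = {}
      for _, mid in ipairs(intermediates) do
        if dist(group.pos, mid.pos) < bestDist then
          table.insert(route, mid)
        end
      end
      tasks[group.name] = { objective = best, route = route }
    end
  end
  return tasks
end
```

Reinforcement and withdrawal would then be extra rules layered on top of this loop, re-run whenever a group is lost or an objective changes hands.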
PROBLEMS
We lose control of how the AI works, but offset that against a cheaper and simpler example.lua.
We would likely see a lot of weird and annoying behaviour.
If there are more objectives than available units and the distances are equal, the script may come up with a different solution each time and be inconsistent in its tasking. For example: a group going all the way towards an objective and then suddenly changing its mind and reversing direction after a restart.
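One way to tame that inconsistency is a deterministic tie-break: when two candidate objectives are at equal distance, compare something stable (e.g. the objective name) instead of relying on iteration order or a random pick, so the same inputs always produce the same tasking across restarts. A minimal sketch, where `betterThan`, `pickObjective`, and the candidate fields are assumed names for illustration:

```lua
-- Hypothetical sketch: prefer the closer objective, and break equal-distance
-- ties by name so restarts with identical state yield identical choices.
local function betterThan(a, b)
  if a.distance ~= b.distance then
    return a.distance < b.distance  -- closer wins
  end
  return a.name < b.name            -- deterministic tie-break
end

local function pickObjective(candidates)
  local best
  for _, c in ipairs(candidates) do
    if not best or betterThan(c, best) then
      best = c
    end
  end
  return best
end
```

The same idea extends to group ordering: process groups in a fixed sorted order rather than table-iteration order, which Lua does not guarantee for hash parts.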
BENEFITS
For those who don't care, the randomness may even be enjoyable. You could apply the script with absolutely no instructions and no objectives, and the units can still react. With one objective, it's an all-or-nothing "pile in" scenario.
It would let example.lua stay nearly empty, or at least very simple, at the cost of direct control.
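To make the trade-off concrete, a mission file under this model might shrink to little more than a list of objectives. This is purely illustrative: the `autoTasking` flag and the table layout are invented for this sketch, not fields the current example.lua actually supports.

```lua
-- Hypothetical minimal example.lua under the proposed layer: declare
-- objectives (or none at all) and hand everything else to the tasking AI.
local mission = {
  objectives = {
    { name = "Airfield", priority = "primary" },
    { name = "Bridge",   priority = "secondary" },
  },
  -- No per-group routing: the intelligence layer assigns and re-tasks
  -- groups itself, reinforcing or withdrawing according to its ruleset.
  autoTasking = true,
}
return mission
```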