nashpy.learning package
Submodules
nashpy.learning.fictitious_play module
Code to carry out fictitious learning.
- nashpy.learning.fictitious_play.fictitious_play(A: ndarray[Any, dtype[_ScalarType_co]], B: ndarray[Any, dtype[_ScalarType_co]], iterations: int, play_counts: Any | None = None) → Generator
Implement fictitious play
- Parameters:
A (array) – The row player payoff matrix.
B (array) – The column player payoff matrix.
iterations (int) – The number of iterations of the algorithm.
play_counts (Optional) – The initial play counts for each player.
- Yields:
Generator – The play counts after each iteration.
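The behaviour documented above can be sketched in plain NumPy. This is a simplified, hypothetical re-implementation for illustration, not nashpy's own code: each player best-responds to the opponent's empirical play counts, the counts are updated, and the new counts are yielded. Ties here resolve deterministically to the lowest index via `np.argmax`, whereas the library may break ties differently.

```python
import numpy as np


def fictitious_play(A, B, iterations, play_counts=None):
    """Illustrative sketch: yield play counts for each iteration."""
    if play_counts is None:
        # Start both players with zero counts for every action.
        play_counts = (np.zeros(A.shape[0]), np.zeros(A.shape[1]))
    yield play_counts
    for _ in range(iterations):
        row_counts, col_counts = play_counts
        # Each player best-responds to the opponent's empirical counts.
        row_play = int(np.argmax(A @ col_counts))
        col_play = int(np.argmax(row_counts @ B))
        row_counts = row_counts.copy()
        col_counts = col_counts.copy()
        row_counts[row_play] += 1
        col_counts[col_play] += 1
        play_counts = (row_counts, col_counts)
        yield play_counts


# Matching pennies: empirical frequencies approach (1/2, 1/2).
A = np.array([[1, -1], [-1, 1]])
B = -A
*_, final = fictitious_play(A, B, iterations=500)
row, col = final
print(row / row.sum(), col / col.sum())  # both close to [0.5 0.5]
```

In nashpy itself this generator is also exposed as `Game(A, B).fictitious_play(iterations=...)`.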
- nashpy.learning.fictitious_play.get_best_response_to_play_count(A: ndarray[Any, dtype[_ScalarType_co]], play_count: ndarray[Any, dtype[_ScalarType_co]]) → int
Return the best response to a belief based on the opponent's empirical distribution of play.
- Parameters:
A (array) – The utility matrix.
play_count (array) – The play counts.
- Returns:
The action that corresponds to the best response.
- Return type:
int
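A minimal sketch of what this function computes, assuming the expected utility of each action is taken against the opponent's empirical counts (the belief is proportional to the counts, so normalising is unnecessary) and ties resolve to the lowest index:

```python
import numpy as np


def get_best_response_to_play_count(A, play_count):
    """Illustrative sketch, not nashpy's implementation."""
    # Expected utility of each row action against the opponent's
    # empirical play distribution (proportional to the counts).
    utilities = A @ play_count
    return int(np.argmax(utilities))


A = np.array([[3, 0], [1, 2]])
print(get_best_response_to_play_count(A, np.array([5, 1])))  # -> 0
print(get_best_response_to_play_count(A, np.array([1, 5])))  # -> 1
```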
- nashpy.learning.fictitious_play.update_play_count(play_count: ndarray[Any, dtype[_ScalarType_co]], play: int) → ndarray[Any, dtype[_ScalarType_co]]
Update a belief vector with a given play
- Parameters:
play_count (array) – The play counts.
play (int) – The given play.
- Returns:
The updated play counts.
- Return type:
array
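A sketch of the update step, assuming it returns a fresh array with the chosen action's count incremented rather than mutating its input:

```python
import numpy as np


def update_play_count(play_count, play):
    """Illustrative sketch: increment the count of the played action."""
    updated = np.asarray(play_count).copy()
    updated[play] += 1
    return updated


counts = np.array([2, 0, 1])
print(update_play_count(counts, 1))  # -> [2 1 1]
print(counts)                        # original is unchanged: [2 0 1]
```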