Menacing: A MENACE emulator

Rather inspired by today’s Neural Networks exam, and Anoop’s amazing tutoring on the topic, I created a simple MENACE emulator in Python. Both the source and the Windows executable are available below, licensed under an MIT/Academic Free License. In other words, do anything you want with it, and try to give me credit 🙂

Running menacing in a console shows you a tic-tac-toe board with a player choice to make. menacing --state shows the current state of the matchboxes. menacing --debug shows how the program thinks as it plays the game. menacing --train [iterations] lets you train the matchboxes using a computer vs. computer mode.
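
Under the hood, MENACE’s scheme is simple to picture: each board position gets a matchbox of beads, one pile per open cell, and a move is drawn at random in proportion to the beads. Here is a minimal Python sketch of that idea — the function names and the starting bead count of 4 are illustrative assumptions, not menacing’s actual internals:

```python
import random

def new_matchbox(board):
    """One matchbox per position: beads for every empty cell.
    The board is a 9-character string of 'X', 'O' and '.'."""
    return {i: 4 for i, cell in enumerate(board) if cell == '.'}

def pick_move(matchbox):
    """Draw a bead at random; moves with more beads get drawn more often."""
    weighted = [m for m, beads in matchbox.items() for _ in range(beads)]
    return random.choice(weighted)

box = new_matchbox('X.O.X.O..')   # empty cells: 1, 3, 5, 7, 8
move = pick_move(box)             # any of those, weighted by bead count
```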

If you download the Windows executable bundle, go to the dist folder to find menacing.exe. It’s a console application, so you’ll have to run cmd.exe, navigate to this folder and then execute it for maximum satisfaction. If you’re downloading the source, run menacing --init first.

I’d love it if you post your experiences with menacing here. Thanks 🙂

How do you train it from a dumb saved matchbox? Initially play to lose (for about 10 tries), then always play to win; I find this to be the best method. Or use --train and the work is done for you 🙂
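
The computer-vs.-computer training that --train performs can be sketched as a self-play loop: play bead-weighted games, then reward the moves of the winning side and punish the losing side’s. The following is a standalone illustration, not menacing’s actual code, and the reward and penalty values are guesses:

```python
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def train(iterations):
    boxes = {}                      # board string -> {cell: bead count}
    for _ in range(iterations):
        board, mark, history = '.' * 9, 'X', []
        while winner(board) is None and '.' in board:
            box = boxes.setdefault(
                board, {i: 4 for i, c in enumerate(board) if c == '.'})
            weighted = [m for m, n in box.items() for _ in range(n)]
            # if every bead is gone, fall back to a uniform choice
            move = random.choice(weighted or list(box))
            history.append((board, move, mark))
            board = board[:move] + mark + board[move + 1:]
            mark = 'O' if mark == 'X' else 'X'
        result = winner(board)      # None means a draw
        for b, m, player in history:
            if result is None:
                boxes[b][m] += 1                       # draw: small reward
            elif player == result:
                boxes[b][m] += 3                       # win: big reward
            else:
                boxes[b][m] = max(boxes[b][m] - 1, 0)  # loss: remove a bead
    return boxes

matchboxes = train(500)
```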

Learn more about MENACE here and here.
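
The learning rule itself — reinforcing choices that led to wins or draws and thinning out ones that led to losses — fits in a few lines. The reward values below are illustrative, not menacing’s actual constants:

```python
def reinforce(matchbox, move, outcome):
    """Adjust bead counts after a game. A move whose beads reach
    zero is effectively removed from the matchbox."""
    delta = {'win': 3, 'draw': 1, 'loss': -1}[outcome]
    matchbox[move] = max(matchbox[move] + delta, 0)

box = {0: 4, 4: 4, 8: 4}
reinforce(box, 4, 'win')    # box[4]: 4 -> 7
reinforce(box, 0, 'loss')   # box[0]: 4 -> 3
```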

The Windows bundle is slightly out of date as of this posting; it should be fixed soon. Get the source for the latest.

Source at Github

12 Replies to “Menacing: A MENACE emulator”

  1. Maybe wrap all exit(0) calls with a simple unprocessed raw_input()? Open the command line first and then run it; you’ll be able to see --state fine then. Or use --debug, that’s more interesting.
    The channel is amazing, and yeah, nearly undoable in Python with the same level of flexibility (no GUI modeling, for instance). Python/Win32com (or Python/wxWidgets) can emulate it, but not without a lot of work manually writing .rc files and so on. It’s a shame, because combining the core language with a GUI modeler would’ve been amazing. Hope somebody out there is working on something like this.

    1. Was just reading this after 8 years. Man… I can’t believe I was saying all those things supporting VB6 over Python. Embarrassing! Internet history is a bitch 😉

      … and I proved myself wrong with Python: that ad-display thingy for RedSplash was done with Python and wxWidgets… hehe. Irony!

  2. I can see why you love Python… but VB has its cuteness too 😉 (see channel #26 ACV hehe, nearly undoable in Python I guess :D).

    One general doubt on Python: how do I pause the final output screen in Windows? I didn’t download the executable; I’m running from source. Because of this I can’t see what happens when I give the --state parameter. HELP!!

  3. I’ve updated the simulator to be more intelligent about winning situations, added a --debug state, and also made negative feedback more interesting. Try tutoring it now 🙂

  4. Well, I meant, a way to take the human out of the training loop.

    When training, it will be the computer playing against itself.

    You could do that, let’s say, 100,000 times and hope to get a good result; a brute-force way to learn.

    > the moves shouldn’t be random for maximum effect

    What I meant by random is: you start with a know-nothing state, and after each loop you learn something.

    > who are you
    Clue: I think in 4d
    > IRL?
    Don’t have one.

  5. Anoop: kinda shows the difference b/w VB and Python. Python is the coolest language ever! 😉 Would love more feedback.

    Spacecro: It learns by reinforcing choices made in winning/drawing situations and removing them in losing situations. See this. And yeah, your idea about iterative learning would work, although, the moves shouldn’t be random for maximum effect… the algorithm should play to win… perhaps some form of zero-sum games implementation? Btw, who are you IRL?

  6. Looks cool.

    Unfortunately, I was not there during Anoop’s tute session, so I don’t have a clue how it learns.

    But does it learn only from its own winning moves, or can it learn from my winning moves too?

    If it can, then won’t it be possible to train it automatically by iteration, with the computer playing against itself? Start playing randomly; when somebody wins, their recorded moves can be used to train the board.

    OK, it’s dumb if my assumption was wrong in the first place.

    Anyway, thanks for that wonderful code.

  7. Boy, it looks cool!!

    One night and you did it, eh? All I did was create 288 arrays in VB 😉

    Was just checking it out… more comments as I play with it. Anyway, looks really good.
