This is the beginning of a cross between artificial life and Redcode, based on using the values within the pixels of objects to define their behavior.
This may not be what the wiki is intended for, but the code, along with a lengthy discussion of what it attempts to represent, can be found here: https://bitbucket.org/TwoLivesLeft/codea/wiki/Limage
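A minimal sketch of the pixel-gene idea in plain Lua, assuming a hypothetical encoding (the names and the channel-to-action mapping below are invented for illustration; the actual Limage encoding is described on the wiki):

```lua
-- Hypothetical sketch: each pixel's color channels are read as a tiny
-- instruction. Here red selects an action and green/blue become
-- movement operands. This mapping is illustrative only.
local function decodeGene(r, g, b)
  local ops = {"move", "turn", "wait", "grow"}
  return {
    op = ops[(r % #ops) + 1],  -- red channel selects the action
    dx = (g % 3) - 1,          -- green maps to -1, 0, or 1 horizontally
    dy = (b % 3) - 1           -- blue maps to -1, 0, or 1 vertically
  }
end

local gene = decodeGene(129, 200, 64)
print(gene.op, gene.dx, gene.dy)  -- turn  1  0
```

With a scheme like this, any small image (an icon, a sprite) doubles as a behavior program, which is what makes the "pixel genes" idea appealing.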
This sounds really interesting. I love that you set up a wiki page about the idea.
Will have to try out the code.
Super work Ipda41001!
I’ve only spent a short time with it, but so far I love what you have done. I’ve been paying attention to “programming games” and Core War / Redcode myself of late, and over Xmas, as my first ‘larger’ Lua project, I implemented (more like: kludged together) a version of BF Joust in Codea. (I’ll upload it eventually, but it still has at least one big silly bug in the interpreter which I want to fix before doing so.) I find the realm of non-textual esolangs fascinating and see Living Image(s) as a marvelous step in that direction. Pixel genes!
LIM = 21st c. turtles?
Wow… each day the Codea environment becomes richer and richer… thanks to TLL and the incredible generosity of this community.
@sanit’s code, the discussions it generated, and some previous discussions were very inspiring.
It is funny when the description of code becomes longer than the code itself.
1.1 posted https://bitbucket.org/TwoLivesLeft/codea/wiki/Limage
More interesting behavior.
Taking opinions on next steps. Should combat, true sight, and a few other things be added to lead to design competitions? Now that it’s better encapsulated, I could begin adding a designer and tracking, and remove random Living Images as the default.
Sorry for not responding sooner. I like the idea of design competitions!
I did have a 1.2 I haven’t uploaded yet. It just moves the functions around based on the advice from the “sending a parent class into a child class” discussion. It also exposes genetic drift and something called viscosity (it’s not actual viscosity, just a parameter used to slow creature movement).
Anyway, I noticed two things after posting 1.1. First, for the small creatures, with the added logic that makes movement conditional, they need their movement values to move them farther. Second, six 32x32 creatures run fine on the iPad 1. I’m also envisioning battles between a Twitter 32x32 or 16x16 icon and a Google +1 icon.
I’ll get 1.2 uploaded at some point and then set to work on 1.3/1.4. Those will be some sort of design feature possibly borrowing from Spritely, combined with taking advantage of the text features in 1.3.
I’m planning on taking a few weeks off to play/enjoy Codea 1.3 not related to this. In addition there are some other global extracurricular activities that require attention this week.
1.2 posted https://bitbucket.org/TwoLivesLeft/codea/wiki/Limage
I’ll look into adding a image loader.
Any ideas on the type of competition? Used as is, it could be which one takes over first, starting from even contributions like 3 on 3, etc.
Shooting and/or biting could be added, as well as pinpoint vision (regular sight rather than the distance sense they use now).
@Ipda41001 thanks for the 1.2 update!
Sadly, at the moment I don’t have much to say in regard to feedback… I need to spend more time with your code and I’m afraid time is scarce at the moment.
That said, I do find that it is hard to see and make sense of the individual pixels of 32x32 Lims, even when the world is scaled up. (Code-wise, I wonder what could be achieved with a 6x6 Lim and instruction pointers which could move in 4 directions?) I also find that the surrounding gray ‘cell border’ is too wide in proportion to the pixel ‘code’, but that is purely a visual issue.
A ‘spritely’ like editor would be good as would be IMHO the ability to inspect (and perhaps alter?) the details of a particular LIM at any point while the simulation/game is running.
Re: competition ideas: Are you familiar with Barcode Battler? I know it as a piece of hardware that was released a couple of decades back but noticed recently that the idea has been ported to the iPhone (I downloaded the app but haven’t yet played with it…)
Edit: Just went to look it up myself but forgot about the black-out!
I was trying to not post during the black out but couldn’t last 210 more minutes.
I think I need to work on a way to have the “world” independent of HEIGHT and WIDTH (but still use them as default settings). Then the “world” would be like bugs in a box (though keeping the world wrapping). I could then implement zoom in/zoom out/follow a bug/examine a bug without interfering with the size of the “world”. The grey cell border could then be tied to display size, not world size.
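The decoupling described above could be sketched like this, assuming a hypothetical world table and wrap helper (names are illustrative, not from the Limage code); Lua's `%` operator conveniently keeps negative coordinates on the torus:

```lua
-- Sketch: a world whose size is independent of the screen's WIDTH and
-- HEIGHT, with toroidal ("bugs in a box", but wrapping) coordinates.
local world = { w = 512, h = 512 }

-- Wrap any coordinate back into [0, size). Lua's % returns a result
-- with the sign of the divisor, so negatives wrap correctly too.
local function wrap(v, size)
  return v % size
end

local function moveCreature(c, dx, dy)
  c.x = wrap(c.x + dx, world.w)
  c.y = wrap(c.y + dy, world.h)
end

local bug = { x = 510, y = 2 }
moveCreature(bug, 5, -5)   -- walks off the right and bottom edges
print(bug.x, bug.y)        -- 3  509
```

Since positions never depend on the display, zoom, pan, and follow become pure view concerns layered on top.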
Barcode Battler sounds familiar; I’ll have to check it out after 200 minutes or so.
Triple tap for new random world
Zoom parameter for zooming in on large creatures (Lims)
Up/Left parameters for panning around zoomed worlds
Run parameter for halting the simulation
Naked creatures (grey borders removed)
It also abstracts height and width but I haven’t spent the time to apply that to zoom and clip to the visible box (the box without the hidden border for smooth world wrap movement).
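The zoom and up/left pan parameters listed above amount to a world-to-screen transform; a minimal sketch, with all names (`cam`, `worldToScreen`) assumed rather than taken from the actual code:

```lua
-- Sketch of a camera for the zoom / up-left pan parameters: world
-- coordinates are shifted by the pan offset, then scaled by zoom.
local cam = { zoom = 2, panX = 100, panY = 50 }

local function worldToScreen(wx, wy)
  return (wx - cam.panX) * cam.zoom,
         (wy - cam.panY) * cam.zoom
end

local sx, sy = worldToScreen(150, 75)
print(sx, sy)  -- 100  50
```

Clipping to the visible box then means testing the transformed coordinates against the display size, leaving the world size untouched.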
Not really sure how to implement follow as opposed to examine. I was thinking single tap for both. Possibly a single tap is examine with a follow feature. I’m sort of holding off on examine until Codea 1.3 so that text (numbers) is easier to display. Loading designed Lims will also be easier with Codea 1.3 since the height/width can be returned from a designed image.
I could work on tracking “winners” (heritage).
I’m hesitant to change the movement to four directions, much as I am about implementing true sight. I’m leaning towards a cell model where backwards movement isn’t common and sideways movement is more like flight.
However maybe it’s time to implement multiple behavior models.
My primitive cell preference
Tank-like movement - two treads for rotation, full or near-full reverse, true sight, and maybe a gun/laser
Full, free four-directional movement
Actually, there are many combinations of preferences here. For instance, true sight could be limited to a facing direction and a field of vision, or it could be omnidirectional.
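The two sight variants could share one check, a sketch under assumed names (`canSee`, `facing`, and the cone half-angle are all illustrative): omnidirectional true sight is a range test, while directional sight also requires the target to fall inside a vision cone.

```lua
-- Sketch of the two sight models: omnidirectional (range only) versus
-- directional (range plus a field-of-view cone around 'facing').
local atan2 = math.atan2 or math.atan  -- Lua 5.1/LuaJIT vs 5.3+ two-arg atan

local function canSee(viewer, target, range, fovHalfAngle)
  local dx, dy = target.x - viewer.x, target.y - viewer.y
  if math.sqrt(dx * dx + dy * dy) > range then return false end
  if not fovHalfAngle then return true end  -- omnidirectional true sight
  -- Directional: angle to the target must lie within the cone.
  local diff = math.abs(atan2(dy, dx) - viewer.facing)
  if diff > math.pi then diff = 2 * math.pi - diff end
  return diff <= fovHalfAngle
end

local viewer = { x = 0, y = 0, facing = 0 }         -- facing along +x
print(canSee(viewer, { x = 10, y = 0 }, 20))        -- true (omni, in range)
print(canSee(viewer, { x = 0, y = 10 }, 20, 0.5))   -- false (outside cone)
```

Passing `nil` for the cone gives omnidirectional sight for free, so both models can coexist as the multiple-behavior-models idea suggests.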
Possibly the time spent pre-Codea 1.3 is best spent on implementing multiple models.
Nice, TLL tweeted the Codea 1.3 submission while I was rambling on.
Yeah, it’s finally submitted.
I tried to keep living image()s in mind while making changes to the image class – to make sure nothing would break.