Yojimbo2000's Soda with Super Simple Buttons?

First off:
@yojimbo2000’s mind-bogglingly great UI toolset for Codea is called Soda, and it is something everyone who uses Codea should see and get. Search ‘Soda’ and you’ll find it.

Now then:
A while ago I posted a super-simple button system that basically works like this:

  1. You write button("hi there") in your draw() function
  2. A button with the words “hi there” is drawn on screen
  3. By tapping a toggle in the overlay, you activate a draggable mode; you can then drag the button to any position you like, and the app remembers that position even between launches
  4. You can set an action for the button manually in setup()

Besides being dead easy, I really like this because it mimics the format of the standard Codea drawing commands like rect() and ellipse(). It feels Codea-like.
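
To make that workflow concrete, here’s roughly what usage might look like, assuming the gist’s button() function is included in the project. The setButtonAction() call in setup() is purely illustrative (it is not the gist’s actual API); it just stands in for “set an action for the button manually”:

function setup()
    -- hypothetical helper: attach an action to the button named "hi there"
    -- setButtonAction("hi there", function() print("tapped!") end)
end

function draw()
    background(40, 40, 50)
    button("hi there")   -- declaring the button here is all it takes to show it
end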

Since then I’ve been expanding on this idea, mainly by attempting to make a tool for creating two-choice adventure games (games like DoubleChoose, the app I just released with my daughters). My work so far: https://gist.github.com/DolenzSong/ef91fd371fd13fe1c00a

What I like is that, using this system, a complete sample game definition looks like this (and this exact “game” is in the project just linked to):

function sampleScreenA()
    sprite("Cargo Bot:Game Area", WIDTH / 2, HEIGHT/2,WIDTH,HEIGHT)
    textArea("sample game start screen")
    choice("win sample game", sampleScreenB)
    choice("lose sample game", sampleScreenC)
end

function sampleScreenB()
    background(102, 94, 31, 255)
    sprite("Cargo Bot:Star Filled", WIDTH / 2, HEIGHT/2,WIDTH,HEIGHT)
    textArea("you won sample game")
    choice("reset sample game", sampleScreenA)
end

function sampleScreenC()
    background(75, 31, 31, 255)
    sprite("Small World:Explosion", WIDTH / 2, HEIGHT/2,WIDTH,HEIGHT)
    textArea("you lost sample game")
    choice("reset sample game", sampleScreenA)
end

(textArea() and choice() are extensions of the button() concept to different kinds of UI elements, btw)

So:
@yojimbo2000, this is mainly a question for you:

  • Soda is so beautiful that I just can’t bear the thought of trying to re-implement any of its features (rounded rectangles, for instance) when you’ve basically done it perfectly already.
  • The Soda system seems largely incompatible with the system I’m developing. It’s object-based, not function-based, just for starters.
  • Is there any way I can keep the simplicity of my system but have the beauty of your buttons? It must be possible somehow, right? Can you give me any pointers on how to do it?

@UberGoober

First off, feel free to borrow any parts of the Soda code. The rounded rectangle code was by @LoopSpace , I added anti-aliasing and a stroke to it. The rounded rectangle tab should (I hope!) be reasonably self-contained and portable.

I do continue to think of ways to make ui design easier to implement. If I have time, an interface designer would be the next big tentpole feature for Soda.

I had a play with the code in the repo you linked to; it’s nice.

Could you say a bit more about what you mean by the distinction between object/function based? It doesn’t seem to me that our approaches are that different, i.e. basically button type, button name, callback? My grasp of the distinction between such terms is shaky (I’ve never been trained in coding), but I’m keen to learn more about it.

Your code doesn’t yet seem to remember the button position between launches of the program, and I wanted to ask you how you planned on implementing that, because that’s really a key design choice. Would you write position code directly to a program tab, so that the user can edit it directly; or to an external file (perhaps using json) etc, to keep a strict division between button design and button logic?

If it’s the latter, then there is the question of how to give the user appropriate hooks from the UI design stage back to their code.

Finally (I’m thinking aloud now), there is also the question of how to maintain universalizability: i.e. the button moves to an appropriate position when the display switches orientation, or is a different size. So absolute pixel positions aren’t enough; the designer has to be able to specify whether a position is absolute or relative in relation to each of the enclosing frame’s edges.

So, 3 questions (these are aimed as much at me, thinking about how to improve Soda in future):

  1. How to save/load the UI design: to a program tab, or to an entirely separate file?

  2. In the latter case, how to provide hooks to the button logic code in the user’s program?

  3. How to maintain universalization, so that elements are placed intelligently regardless of device size/orientation?

I think generally that Pythonista is a good model to follow here; their UI designer is very nice. It keeps the design file separate, one file for each screen. WRT 2), if I remember correctly, Pythonista has a single callback for the entire file. So you have a single button-pressed function (I guess not that different from the built-in event-based functions like touched or collide), and then it is up to the user to identify what has been pressed by querying the name of the button. WRT 3), it has radio buttons allowing you to specify which edges an element is relative/absolute to.
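
In Codea/Lua terms, that single-callback pattern might look something like the sketch below. Everything here is illustrative; these are not Pythonista’s (or Soda’s) actual names:

function startGame() print("starting") end
function showOptions() print("showing options") end

-- one shared callback for the whole screen: identify what was pressed by its name
function uiPressed(sender)
    if sender.name == "start" then
        startGame()
    elseif sender.name == "options" then
        showOptions()
    end
end

-- a button tap would then end up calling something like: uiPressed({ name = "start" })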

@Jmv38 any thoughts?

I agree with just about everything you wrote above.
Considering the ‘function’ versus ‘object’ approach: @UberGoober could merely wrap your object creation inside a simpler function that does everything and has only the few parameters he wants. So he would benefit from your gorgeous design via a simple function.
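
Something along those lines might look like the sketch below. I’m assuming Soda’s table-style constructor (Soda.Button{...} with title/x/y/w/h/callback fields), so check the parameter names against the actual Soda source; the defaults are illustrative:

local made = {}   -- cache, so each labelled button is only created once

function button(label, action)
    if not made[label] then
        made[label] = Soda.Button{
            title = label,
            x = 0.5, y = 0.5, w = 0.3, h = 0.1,   -- illustrative default position/size
            callback = action
        }
    end
end

-- usage: button("hi there", function() print("tapped!") end)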

@yojimbo2000: longer response to come soon, but FYI the reason the buttons don’t remember their position currently is because of the defineMainButtons() call in setup(), which hard-codes them. If you comment out that line, the positions persist between launches.

Alternatively, you can go into draw() and add a new button using button("new text"), and after you drag it around, that button will retain its position with no need to comment out the hard-coded values of the other buttons.

The trick is accomplished in the pieceHandler.savePositions() function, which saves each UI piece’s definition to a tab in the project called pieceTables, as a global variable.
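
As a rough sketch of that approach (this is not the actual pieceHandler.savePositions() code, and the table layout and json use are assumptions), the tab-writing part could look like:

-- positions is e.g. { ["hi there"] = { x = 100, y = 200 } }
function savePositionsSketch(positions)
    local tabSource = "-- auto-generated; safe to edit by hand\n"
        .. "pieceTables = json.decode([[" .. json.encode(positions) .. "]])\n"
    saveProjectTab("pieceTables", tabSource)   -- overwrites the pieceTables tab in this project
end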

Okay a little breathing time in the day lets me respond better.

@yojimbo2000, direct answers:

  1. Saved to a separate tab.
  2. Having it all in an in-Codea tab allows direct inspection and editing. Each parameter is named clearly. If adjusted manually, the adjustments will persist.
  3. I’ll punt this to a discussion of scope, which follows.

@yojimbo2000, @Jmv38, as to scope: two main points.

My project at present is not an API, it’s an adventure-game-creation app. By focusing on a practical application, my hope is that I will also end up implementing the most common low-level behaviors the function-based approach will need in order to be usable by others.

And even at its most fleshed-out, this concept is not intended to be terribly versatile. I really think the appropriate model to bear in mind is ellipse(). ellipse() is a powerful tool for beginning programmers and beginning projects, but its main value is ease-of-use. It does a few things well, and is a really important enabler of simple graphics explorations. But it’s not in and of itself well-suited for anything too complex.

Similarly, my concept with button(), textField(), and the like is to craft a very simple component system that is quick, pretty, and functional, but that is not in and of itself suited to much more.

So, for example, as regards yojimbo’s questions about universalization, the answer appropriate to this approach is “button() behaves exactly as ellipse() does. In both cases, the burden of responsive design lies elsewhere.”

function-based vs. object-based
@yojimbo2000, I should be clear that I’m making up this terminology on the fly. This isn’t an industry standard jargon or anything.

I think the best explanation is an example. Say I need to make a two-choice adventure game screen that does the following:

  1. Shows a custom cityscape background
  2. Shows a flying super heroine sprite at 300, 600
  3. Shows the narration “you see a bank robbery!”
  4. Shows choice 1 button labelled “stop it yourself!”, which leads to fightScreen
  5. Shows choice 2 button labelled “call for backup”, which leads to backupScreen

With a function-based approach, I do this by writing a function (and this exact code would be functional with the current state of my project).

function flyingHeroine()
    sprite("Cityscape", WIDTH / 2, HEIGHT/2,WIDTH,HEIGHT)
    sprite("heroine", 300, 600)
    textArea("you see a bank robbery")
    choice("stop it yourself", fightScreen)
    choice("call for backup", backupScreen)
end

And to show it, I add this to draw():

flyingHeroine()

Once it’s showing, I use editing mode to manually position the buttons where I want them, and I’ve defined my screen.

An object-based approach, on the other hand, might go something like this (and I’m speculating here): first I define the object:

flyingHeroine = AdventureScreen()
flyingHeroine.background = "Cityscape"
flyingHeroine.setSprite("heroine",300,600)
flyingHeroine.setTextArea("you see a bank robbery", ?,?)
flyingHeroine.setChoice("stop it yourself",?, ?, fightScreen)
flyingHeroine.setChoice("call for backup",?,?, backupScreen)
--the ?s denote screen coordinates that would have to be determined independently somehow

Then to show it, something like this in draw():

flyingHeroine.draw()

There’s a pain-in-the-butt extra step here of defining the x and y of the buttons and text, which I won’t detail because you have to do it externally, outside the code at hand. It is non-trivial, though. But once you’ve done that, you have reached parity with the function-based screen.

Comparing and contrasting
The object-oriented approach potentially has a major advantage in that its values can be dynamically reassigned, quite easily and as-needed, during run-time. However, this is only an advantage if you need to do that.

The object-oriented approach has a major disadvantage in that using it requires understanding object-oriented concepts. This is particularly hard for newcomers. The function-based approach, on the other hand, is all simple imperative statements. Old-fashioned but much more obvious.

To me, the function-based code is much easier to write and understand, but YMMV.

Now, this last point is something I’m not really qualified to speak to, since I’ve never learned much about memory management. But as I understand it, the object-oriented approach may require keeping track of the object’s life cycle and de-initialization, whereas the function-based approach is more “set it and forget it”. But come to think of it, in my function system, the UI pieces themselves get loaded into memory by the pieceHandler, and never de-allocated. So I’m not sure how the scales balance there. At any rate, I bring this up because, to people who understand it better, this may be an issue.

I suspect I’ve explained this more than was required, but I think I’ve at least said enough to cover the major points.

@Jmv38 just out of curiosity, have you looked at the Soda code directly? From the project example shown, using it requires much more than the creation of the objects themselves. Both draw() and touched() must be configured in exact ways, and any implementation of the buttons has to comply with the structure established therein. Perhaps you understand it better than I do, but as near as I can tell, simply wrapping the object in a function won’t cut it. Just to be clear: I’d be really happy to be wrong!

In contrast, my function-based approach requires no customization of anything. Declaring the button itself anywhere in the draw() cycle is all you need to do to have a working button.

@yojimbo2000, before I forget: for someone with no coding training, you’ve done amazing things. Heck, you’ve done amazing things even for someone who did have training. I hope one day to be as untrained as you! :wink:

FYI the reason the buttons don’t remember their position currently is because of the defineMainButtons() call in setup(), which hard-codes them. If you comment out that line, the positions persist between launches.

When I comment out that line, although the position persists between launches, the buttons no longer do anything.

@yojimbo2000, Yes, this is all a result of me hard-coding those specific values. The button actions are part of the hard-coding too. So when you turn off the hard-coding they don’t get any actions assigned.

Part of my model here is that you can define a button somewhere besides the draw function, and then only refer to it by name in the draw function. That’s how the hard-coding works currently.

The button “to editable mode”, for example, gets fully defined (position, action, etc.) in defineMainButtons(). Then wherever it’s actually being drawn, it’s only referred to by name, because all the other values have already been set. So when you turn off defineMainButtons() it just acts like a default (actionless) button, because no action parameter is supplied in any of the draw methods currently calling it.
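
A hypothetical sketch of that “define once, refer by name later” behaviour (the helper and field names here are mine, not the gist’s actual internals):

definedPieces = {}

function defineButton(name, x, y, action)
    definedPieces[name] = { x = x, y = y, action = action }
end

function button(name, action)
    local def = definedPieces[name] or {}
    local act = action or def.action              -- a run-time argument wins over the predefined one
    local x, y = def.x or WIDTH/2, def.y or HEIGHT/2
    -- a real implementation would draw the button at x, y and fire act() on a tap
    print(name, x, y, act ~= nil)
end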

But perhaps that’s inconsistent. Maybe I should remove the ability to define values outside of direct function calls. Maybe it’s too confusing. Also it breaks the analogy to ellipse()-style functionality: ellipse() doesn’t allow you to define an ellipse by name and then refer to it by that name later.

That would make my buttons even less object-y and even more function-y, which perhaps means I should do it just for consistency. Hmmm.

Sorry, I’m not sure I understand. I thought the point of your system was that the person coding the UI can just issue a bunch of minimal button commands, and edit the positioning of those buttons later. But positioning the buttons seems to break the buttons’ functionality, so I don’t really see the point of it, I’m afraid (in its current implementation).

The analogy to ellipse() is an interesting one, but ellipse() is just a graphics operation; it has no touch events, no logic. I don’t think it’s a particularly useful analogy to aim for, for a couple of reasons:

First, you seem to be doing all touch logic in the draw event, without a touched function, just using CurrentTouch, which strikes me as a little brave! E.g., generally across iOS, buttons trigger on touch-end events. This gives time for the button-press animation to register with the user, which is a really important piece of visual feedback. Now, I’m sure you could program a touch-ended event without using the touched function and just using CurrentTouch, but it would be a bit of work that seems unnecessary when we already have the touched function built in. I’ve never used CurrentTouch, ever (weird :smile: ), so I could be wrong about this, but I imagine that the kind of gesture differentiation that @Jmv38 introduced to Soda would be very difficult without the touched function (e.g. is the user swiping a scrolling list of options, or selecting one of them?).
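
For what it’s worth, a minimal sketch of end-of-touch triggering via Codea’s touched() callback (illustrative only; Soda’s real hit-testing and press animation are more involved):

myButton = { x = 200, y = 200, w = 160, h = 60, callback = function() print("pressed") end }

local function hit(b, x, y)
    return math.abs(x - b.x) < b.w/2 and math.abs(y - b.y) < b.h/2
end

function touched(touch)
    -- trigger on ENDED so the press animation has time to register with the user
    if touch.state == ENDED and hit(myButton, touch.x, touch.y) then
        myButton.callback()
    end
end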

The second thing is that it seems a bit disingenuous to say “the user just has to issue the button command in the draw function, just like using ellipse”, when actually it seems that the button still has to be defined (in the defineMainButtons function) in pretty much the exact same way as it is in Soda. I don’t really see how the two approaches are different. I’d say that Soda is actually less work (particularly for more complex stuff), because Soda handles all of the drawing and touched logic for you. With super simple buttons it seems you have to define the button (the same as in Soda), but then also issue specific draw commands for each element (which gets complex if you have popups, overlapping elements, etc.).

These are just my impressions based on a few minutes playing with super simple buttons, please do point out if I’ve misunderstood stuff.

Broadly though I agree that it’s a question of scope. If you’re coding an “app” rather than a game, or a game that is app-like (i.e. a strategy title with sliders, scrolling lists, etc.), then Soda is a good bet. There is a greater expectation with an “app” that you can use it in any orientation, whereas games can be single-orientation. But if your need is just basically buttons, then Soda is probably overkill. But then you get into the situation of coding bespoke UI code for every project you start, which is a waste of time. Libraries, by their nature, have to be somewhat comprehensive. Still, maybe a “Soda Lite” might not be a bad idea.

So, this is my fault, because super simple buttons can, currently, be used both ways: predefined (i.e. very similar to Soda) and completely on the fly (like ellipse()).

And that means they can be used both ways at the same time: some or all of the attributes can be predefined and also specified at run-time (with run-time specifications taking precedence).

Quote by you:

“The second thing is, that it seems a bit disingenuous to say “the user just has to issue the button command in the draw function, just like using ellipse”, when actually it seems that the button still has to be defined (in the defineMainButtons function) in pretty much the exact same way as it is in Soda.”

…not disingenuous, though: the thing is, as described above, it doesn’t have to be done like Soda. It just can also be. The “just like ellipse” functionality does actually exist, but it’s confusing because I’m not using it exclusively in the project I shared.

I think before any more discussion I should go and change all the code to only use run-time specifications, and in fact remove the ability to do anything the Soda way. That will be much simpler to code anyway, albeit slightly wasteful.

Ok, I thought I wasn’t quite understanding it; that makes more sense. I remember the first super simple buttons demo you did, around the same time I was starting Soda. It’s definitely a cool idea.

Thanks yo! I thought it would be slick of me to put both kinds of functionality into these buttons. But this discussion makes it really clear that I should pick one approach and stick to it. More anon.

@yojimbo2000: oh and FYI, CurrentTouch does actually deliver a touch with ENDED state to these buttons during the draw cycle; that’s exactly what they use to trigger their actions.

Re CurrentTouch. You’re absolutely right, now I feel stupid.

The rounded rectangle code was by @LoopSpace

To the extent that I can, I release my code as CC0 (the times when I can’t are when it’s derived from something else). I’m not sure which version of the rounded rectangle code @yojimbo2000 is using, but all of my original code ought to be CC0.

(Of course, attribution is a nice way of saying “Thank you for the useful code”.)

Regarding a different theme in this discussion, my UI (which looks positively ancient compared to Soda, but is functionally adequate for my needs, which is why I’ve not looked at Soda) is modular. I use @toadkick’s cmodule project for importing libraries, and so what I do is import only those modules that I want in my UI prior to importing the main UI module. So the start of one of my projects might look like:

cimport "Menu"
cimport "Keyboard"
cimport "NumberSpinner"
ui = cimport "UI"()

This creates ui as an instance of my UI class with the capabilities for menus, a customisable keyboard, and a number chooser. But I haven’t loaded the code for various other bits and pieces (such as a keypad, a colour wheel, textboxes, and so forth). So it only loads in what I need and ignores everything else.

The UI class is like a delegator. I don’t interact with menus or keyboards directly but ask the ui instance to do it for me.
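
As a generic illustration of that delegator idea (this is just the shape of it, not my actual UI class):

UI = class()

function UI:init()
    self.elements = {}
end

function UI:add(element)
    table.insert(self.elements, element)
end

-- the delegator simply forwards drawing and touches to whatever it currently holds
function UI:draw()
    for _, e in ipairs(self.elements) do e:draw() end
end

function UI:touched(touch)
    for _, e in ipairs(self.elements) do e:touched(touch) end
end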

Turning to yet another aspect of the above discussion, I find it useful to define locations relative to particular points on the screen. So I’ll say “I want this menu to be at (50,-30) relative to the top left corner of the screen”. Then when the top left corner of the screen changes, all these get updated relative to the new screen orientation. This requires some messing around with transformations behind the scenes, but I have code that sorts all that out in terms of “anchors”.
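
A rough illustration of the anchor idea (not my actual anchor code, which uses transformations; this just shows the relative-placement principle):

local anchors = {
    topLeft     = function(dx, dy) return dx, HEIGHT + dy end,
    topRight    = function(dx, dy) return WIDTH + dx, HEIGHT + dy end,
    bottomLeft  = function(dx, dy) return dx, dy end,
    bottomRight = function(dx, dy) return WIDTH + dx, dy end,
}

-- resolved at call time, so an orientation change just works on the next frame
function anchored(corner, dx, dy)
    return anchors[corner](dx, dy)
end

-- e.g. in draw():  local x, y = anchored("topLeft", 50, -30)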

In one project, I want the main project stuff to remain fixed relative to the iPad (so if the iPad is upside down so is the picture) but the UI stuff should always appear the right way up. So I have a few auxiliary functions that sort out that stuff to ensure that everything’s in the right place on the screen.

Have updated the gist:

  1. Now only uses dynamic UI piece commands (ellipse-like)
  2. Now uses RoundedRect from Soda (and LoopSpace)
  3. My daughter chose new art for the win and lose screens :smiley:

@yojimbo2000: I want to try to incorporate your fancy blur effect, but in the Soda project it requires monkeying around with the draw method. Do you think there’s a way I could use it more in keeping with my method?

Cool, will look at this later.

Re blurriness: No, I don’t think so. In order to blur what’s under a panel, the function has to have access to your code’s drawing. It then draws everything again to the texture image, with blurriness applied, but stops when it gets to the blurred panel itself (so that the panel and its elements themselves are not included in the blur, just what’s under them).

It just does this once, when the blurred panel is created, it doesn’t update live.

I don’t think it’s too onerous though. Just put all your drawing in a function called drawing(breakpoint). It encourages you to keep a strict separation between updating and drawing, which is good practice, I think. (E.g. there are lots of scenarios where this is useful: say your camera is following the player, and you want to update all positions, resolve collisions, etc., before you set your camera and start drawing your scene. Or if you’re doing 2-player, one-device split-screen, you need to draw at 2 locations but only want to update once. And so on.)

(I think Codea’s draw function is badly named, it should be update or something)
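
A sketch of that update/draw split. The drawing(breakpoint) name mirrors what I described above; the update() helper and the player table are just examples:

local player = { x = 100, y = 100, vx = 60 }

function update()
    player.x = player.x + player.vx * DeltaTime   -- state changes only, no rendering
end

function drawing(breakpoint)
    background(40, 40, 50)
    ellipse(player.x, player.y, 50)
    if breakpoint then return end   -- a blur pass can stop here, before panels and UI
    -- panels and other UI elements would draw after this point
end

function draw()
    update()     -- update once per frame...
    drawing()    -- ...then render; a blur capture could call drawing(true) separately
end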

I guess though with super simple buttons, updating and drawing are all in the same function. It would mean that the button logic would be run twice in a cycle, on those cycles where a blurred panel is created or re-oriented. I guess that’s not the end of the world, but you would have to catch callbacks being called twice, etc.

I do think that bundling drawing and touch logic in one function, and using CurrentTouch, is not a good idea though.

I think users expect pretty much all mobile apps to be multitouch. Almost everything I’ve written is. So that means using the touched function.

I think you could still have myButton:draw() and myButton:touched() and it would still be “super simple”

I understand the big-picture objection to CurrentTouch, but I would like to stick with my current approach of only adding features as I need them for my project. I don’t currently have any need for multitouch.

I’m also trying to stick to my guns regarding ellipse()-like usability; I did just finish revising a lot of my code to remove things that weren’t ellipse()-like, after you observed my inconsistency here. So I don’t want to turn back now. If a thing can’t happen in the button’s own function, I can’t do it.

Here’s a thought about blurring, and it’s probably more trouble than it’s worth: the first super simple piece drawn in the draw function flips a bufferedGaussian flag and then runs the whole draw() function itself, but to a buffer; because of the flag, none of the super simple pieces render. After that draw-within-a-draw completes, we return to the main draw loop, flip bufferedGaussian back to false, and now the super simple pieces can render, using the buffer as the background image they apply their blur to.

This would obviously not be foolproof, but again, I’m trying to make it work for my immediate purposes before worrying about global usefulness. And with my current code, it might work, mightn’t it?
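
Roughly, in code (untested, just the shape of the idea; the needNewBackdrop flag and the stand-in “piece” at the end are placeholders, not code from my gist):

local bufferedGaussian = false
local backdrop, needNewBackdrop = nil, true

function draw()
    if needNewBackdrop and not bufferedGaussian then
        bufferedGaussian = true            -- suppress UI pieces for the capture pass
        backdrop = image(WIDTH, HEIGHT)
        setContext(backdrop)
        draw()                             -- re-run the scene into the off-screen buffer
        setContext()
        bufferedGaussian = false
        needNewBackdrop = false
    end
    background(60, 80, 120)                -- the scene itself
    if not bufferedGaussian then
        -- stand-in for a UI piece: a real piece would sample the region of
        -- 'backdrop' behind itself and apply the blur to that
        sprite(backdrop, WIDTH/2, HEIGHT/2, 200, 120)
    end
end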