Codea Enhancements: Learnable Programming

@toadkick, OK, I think I gotcha, that is fair enough. I know all too well about the finite amount of time and effort that can be invested into something, so I grok where you are coming from. There are a lot of features folks already want, and the learnability of the tool is not really a limiting factor for the active community of folks already using it.

.@xavian I think the learnability could be improved, especially by following the philosophy expressed in Victor’s essay. But improving learnability should be achieved by making Codea a better tool for all users (e.g. better autocomplete, better documentation).

(.@username links don’t work properly at the start of a line, so just add a dot in front of the “@” symbol to ensure they link.)

I agree completely @Simeon, and better autocomplete and documentation would be a boon to all.

You know, I tried to get thru that article - really. And I wasn’t going to say anything, but since the thread is here… If I tried to rebut everything on the internet that strikes me as “Hogwash” (good word, there, Andrew!), well - I’d never get anything else done. (“An article I disagree with, Robin - To the Batcave!”)

He has some valid points - and some points I disagree with 100%. And those points poison the rest of the article. Like a proof - if one of your postulates is false, the whole thing goes out the window. He may be correct for the concept of “learning to program” in some fuzzy, out-of-focus generic way - but what he suggests is exactly the opposite of what you unfortunately need to do to actually learn a programming language you can use in the real world - what you need to know to actually get something done. It is unfortunately a fact that learning to program requires learning a language, and learning languages, even artificial ones, can be hard - you simply must memorize a vocabulary and rules of grammar, and if some IDE spoon-feeds things to you, you’re unlikely to memorize things nearly as quickly or thoroughly.

The way to learn a language, natural or otherwise, is immersion. Jump in feet first. Make mistakes. Learn from them.

Right off the bat, in the first section: “JavaScript and Processing are poorly-designed languages that support weak ways of thinking, and ignore decades of learning about learning.”

That’s lovely - but you know what? If you want to program for the web, you NEED TO LEARN SOME JAVASCRIPT. Plus, you could say that about most real languages nowadays - they’re not designed to be easy to learn, they’re designed to be powerful/expressive/stable/etc. If you want to do an iPad app - even using Codea - to get it into the app store, you’re going to have to learn Xcode and a smidgen of Objective-C, and those aren’t easy, and if they ‘… ignore decades of learning about learning’, well - Boo Hoo. In reality, few languages are designed for “learning” - BASIC and Pascal come to mind. And neither of those is considered a reasonable language today for general programming (I always hark back to the fact that Wizardry was written in Apple Pascal - to be fair, Pascal extended with the language features that were missing and necessary to write useful graphics code). “Learning” a language you can’t then use for the real world is almost useless - I guess once you had the basic concepts, you could move on to a real language, but I still say immersion is better.

But that’s not what put me off the rest of the article. His first example (lovingly done as a graphic, so you can’t cut and paste it, sigh) of an ellipse, had this commentary:

“Likewise, guessing the third argument of the “ellipse” function isn’t “learning programming”. It’s simply a barrier to learning. In a modern environment, memorizing the minutia of an API should be as relevant as memorizing times tables.”

I would say this is fundamentally wrong. It’s like saying that learning the rules of grammar and vocabulary isn’t part of learning a natural language. If you’re going to use a real language and do real programming, the meanings of the parameters for function calls are something you either need to KNOW - without an IDE spoon-feeding them to you - or at least know how to look up. And a good way to memorize them is to change them - tweak the example, see what happens, without knowing ahead of time. The element of “what if…”, the element of surprise, the willingness to give it a shot without knowing for sure if it will do what you want, is FUNDAMENTAL to programming.

By making his “learning environment” do things like show what parameter changes what on a hot-mouseover, he is doing the exact opposite of having you learn - what he’s teaching you here is that you don’t need to know what those parameters do, because something will show you. He’s teaching you to be crippled, to be unable to function outside of an IDE. Don’t get me wrong - that stuff would be fine in an IDE that’s not intended to teach, but is intended to assist - it’s ok to use a calculator once you already know and understand the math. But the first time Joe learner is caught outside of the IDE, they’ll be largely helpless - very little memorized, and they don’t even know for sure where to go to look for answers, because their learning environment always held them by the hand.

Maybe I’m old. There’s a part of me that says the best way to learn to program is to type in code from magazines, by hand, and puzzle out what the hell all of that means - to make the inevitable typos, and have to read the error messages, and work out what you did wrong, and why what you did had the effect it did. I know - those days are gone, good riddance, it was hard work - but it’s how I learned BASIC (Atari 400 with a membrane keyboard, baby), and it taught me more than the language - I learned troubleshooting skills, and that I am smart enough to be able to do whatever I want to on a computer if I apply myself - and I don’t buy that the “new learning” will be nearly as effective as jump-in-with-both-feet.

Codea already does the one thing it needs to do for people wanting to learn to code: It provides them the opportunity to do so, lowering the barrier to entry to something that a total noob may still feel OK with. It brings back the best parts of old 8-bit programming - the immediacy of being able to sit down and code NOW without a tremendous IDE and application framework being in the way. My daughters don’t know a desktop computer from Adam - to them, the iPad is the computer, and by the time they get older, I suspect for personal computing a tablet will be the rule rather than the exception. If we told them “To learn to code, you’ll need to download 3 gigs of Xcode, and we’ll start with this giant Objective-C framework” - we’re done before we began. Codea provides them the opportunity to learn to code without such a giant barrier to entry. Codea does not, and should not, try to TEACH them to code. That’s what school, the internet, tutorials, and their dear old father are for. And you know what? Maybe we’ll sit down together and make a thing where they slide sliders around to change the ellipse, and then they’ll KNOW it - not because someone handed it to them - because they BUILT it.

Enough of the rant - you kids get off of my lawn.

.@Bortels I first saw that video a couple of days ago on HN, and it’s been bothering me ever since, and I couldn’t quite put my finger on why. I think you’ve nailed it for me. Thanks.

I have to admit - I didn’t watch the video (was at work). And - the article is NOT completely without merit; the only thing I object to are some of the conclusions.

I’m gonna bitch about Java for a bit. There’s a point - I promise.

At work, Java programmers are a dime-a-dozen - and the VAST majority of them can’t program their way out of a wet paper sack. There are exceptions, but for every competent Java programmer, we have 10 that aren’t worth the space they take up.

I’ve been trying to figure out why - and the conclusion I’ve come to is that they didn’t learn “computer programming” in college - they learned Java. Moreover - they learned, usually, the Eclipse IDE, and about some common Java libraries and hosting environments (JBOSS). They have little or no knowledge of the environment the JVM is running in, or how to debug said environment (or frankly even the JVM itself), so - they’re ultra fragile. They can handle it if nothing goes wrong, but the real world isn’t like that, and when something in the library they’re using doesn’t do what they need it to do - they’re totally lost. Zero troubleshooting skills, zero knowledge of the OS (it’s freaking Red Hat Linux! It’s free! Go install it!). Zero knowledge of our environment. And zero interest in learning any of that.

Case in point - the default Java HTTPS library (or the one they’re using, at least) will fail a web GET call if the certificate isn’t 100% perfect - correct, not expired, etc. This is normally desirable. That is - until they’re trying to connect to a test server, one we don’t control, that has a self-signed cert (and expired, to boot). We said “oh, well, it’s QA - just ignore the fact that the cert is self-signed and expired. With wget you use --no-check-certificate, with curl it’s -k - Java must have some way to do that, right?”. I suspect it does - but our “Java programmers” were left totally high and dry. Everything was a black box, all they knew was how to plug in the parts - they weren’t even able to diagnose the issue; they had ME look at the logs to find out what was wrong.

Apparently, the way to “fix” this is to subclass the HTTPS class in that library, and then ignore the cert issue - but this was completely beyond the comprehension of these guys. They had so little real-world experience with programming, they couldn’t even google for and implement a solution. (The final “solution”: we paid a couple hundred bucks and got a “real” certificate for that box, and the next time this happens they’ll be right back in the same boat, because that’s how the real world works.)

Needless to say - even explaining the issue to them was almost impossible; SSL is non-trivial, and even really smart guys can get confused over certificates and signers and keys and such and so. Java (or JBOSS, or both) makes it worse by using a “keystore”, which adds a level of indirection on top of things, and as far as I can tell adds almost no benefit.

(PS for those in the know: We use exactly zero EJB features; this all could run on tomcat just as well. But - they “know JBOSS”, so they get JBOSS, and you get this amusing lost kitten look when you ask them about Enterprise Beans. Which is good - they’re heinously non-performant. Still.)

Why did I tell you all of this? Here’s why:

I fear people who “learn” with an IDE that holds their hands too much are going to learn the IDE - not how to program. I know these boys did. My advice, to management, was to replace them with old-school C systems programmers, and have them spend a few months learning Java. (no - they didn’t do this, unfortunately).

Disclaimer: I was a “professional” Java programmer for just under a year - went in knowing nothing but “hello world”, learned enough to see just how shoddy everything was in about 60 days. This was at a prior employer - I won’t say who, PM me if you’re curious. I was hired for RADIUS ( http://en.wikipedia.org/wiki/RADIUS - I know it backwards and forwards from my 8 years running PacificNet, a local ISP ) knowledge - my “let’s learn java” proof-of-concept server did 8000 auths per second (object reuse! as opposed to 6000 in my perl ‘reference’ version), and the “real” version I was supposed to help them troubleshoot? 10 to 12. per second. Goal was 40. I pointed out why it wasn’t working (they completely re-built all of the data structures on every call; zero concept of what “initialization” means, or of how badly they were thrashing the GC), and they transferred me to another department rather than have someone point out the emperor has no clothes. Oh - and to top it off, because they didn’t use real-world data (real world data is messy) when they coded, they were dumping stack almost every call, multiple times. Idiots, the lot of em.

.@Bortels wrote:

I would say this is fundamentally wrong. It’s like saying that learning the rules of grammar and vocabulary isn’t part of learning a natural language. If you’re going to use a real language and do real programming, the meanings of the parameters for function calls are something you either need to KNOW - without an IDE spoon-feeding them to you - or at least know how to look up.

Actually this is a place I very much agree with Bret Victor. I hate inefficiency. If a computer can do something so that I don’t have to, then the computer better do it for me. And it better do it well.

This includes memorisation, English grammar and spelling, and coding.

For example, ever since it was introduced a decade ago, I have always relied upon having a dictionary built into every single text view in Mac OS X. I can hover over any word and see the English definition. This has made me a better writer and increased my vocabulary. The harder it is to look up words, the less I will do it (whether consciously or not, it doesn’t matter.)

I never want to look up API documentation. But I have to because it’s obviously not all going to fit in my head. So the quicker and easier that is, the better. If it’s right on top of the words I just typed, then perhaps that is best of all.

Memorising this sort of stuff is not useful to me — this stuff should exist outside of my head. The things inside my head are the concepts I am creating and expressing, the rules should be clear enough that they never distract from creation.

.@Bortels wrote:

But the first time Joe learner is caught outside of the IDE, they’ll be largely helpless - very little memorized, and they don’t even know for sure where to go to look for answers, because their learning environment always held them by the hand.

Bret Victor is actually thinking a little ahead here. I think he understands that a lot of his concepts aren’t applicable to existing languages and technologies. His argument is that our whole way of thinking is outmoded.

His point is that we should never even consider building a system, or a data structure, that doesn’t encourage visual use. He doesn’t want us to think about how we can apply these concepts to Javascript, he wants us to think about how we can apply these concepts to creating in general. He is targeting programming because it is so held-back by this way of thinking right now.

In his world, a language and an IDE are one and the same. One doesn’t exist without the other (e.g., Smalltalk). To him, the only things that should exist inside your head are the things you want to create — anything else is noise to be eliminated. Victor believes that computers are capable, and should deal with all the rest. We just need to be smart in our approach to tool development.

That said, Victor’s article is focused on teaching. So you can see why he gets caught up in designing an IDE that holds your hand. However I am most certainly a believer in the same philosophy as him: to get what’s in your head onto the screen in the most efficient manner possible. I’ve been interested in this way of thinking for many years now.

Bah, sorry, I don’t know why I keep referring to the article as a video. Perhaps because I am tired and/or delirious at the moment. I read the article, not watched the video. Maybe I kept referring to it as a video because of all of the little videos embedded in the article. Maybe I’m just brain damaged. Anyhoo, good story! :slight_smile:

That may explain the disconnect - he wasn’t talking about teaching programming then, he was talking about teaching a nonexistent ideal of programming.

I would suggest a 100% required first step is to make a usable language that fits his criteria. I suspect such an effort is doomed to failure.

Everyone here fluent in a computer language knows that the real difficult parts have zero to do with the grammar and API, generally. Yes - you can have issues with those, but that’s paperwork. I think he’s focusing on the wrong thing.

By way of comparison - to deeply analyze the novels of Flaubert, you gotta know French, and if you don’t know it, you gotta learn it. And the process of learning French will teach you very little about “Madame Bovary” - it’s simply a necessary prerequisite. “Learning French” and “Learning about characterization in Madame Bovary” are two completely different things. (Disclaimer - I neither know French nor have read “Madame Bovary”, even translated - I just like saying “Flaubert”.)

Andrew mentioned quaternions earlier - for me, a common stumbling block beyond language is the MATH, which is arguably a second-order part of the language (or, perhaps, “the language from which all other languages derive”). In 3d programming, there is a problem called “gimbal lock” that arises when you use Euler angles (yaw, pitch, roll) - it happens in the real world with a compass gimbal as well. Using quaternions is the “right” solution to this, and that fact transcends language - the 3d programming solution has absolutely nothing to do with the underlying language. But - you can’t pretend to understand them (I’m still not sure I do) without representing them in a language, even if that language is “mathematics”. Languages are a necessary pre-requisite to be able to move on to the higher-level thought processes.
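(For anyone who wants to poke at the gimbal lock claim directly, it’s easy to check numerically. The plain-Lua snippet below is just my own illustration - nothing to do with Codea’s API - and it builds the combined yaw-pitch-roll rotation to show that once pitch hits 90 degrees, a yaw of +10 degrees and a roll of -10 degrees produce the exact same matrix: the two axes have collapsed into one, which is the degree of freedom quaternions preserve.)

    -- Quick numerical check of gimbal lock (plain Lua, illustration only).
    -- With yaw-pitch-roll (Z-Y-X) Euler angles, a pitch of 90 degrees makes
    -- the yaw and roll rotations act about the same axis.
    local function Rx(t)
        local c, s = math.cos(t), math.sin(t)
        return {{1,0,0},{0,c,-s},{0,s,c}}
    end
    local function Ry(t)
        local c, s = math.cos(t), math.sin(t)
        return {{c,0,s},{0,1,0},{-s,0,c}}
    end
    local function Rz(t)
        local c, s = math.cos(t), math.sin(t)
        return {{c,-s,0},{s,c,0},{0,0,1}}
    end

    local function mul(a, b)
        local m = {}
        for i = 1, 3 do
            m[i] = {}
            for j = 1, 3 do
                m[i][j] = a[i][1]*b[1][j] + a[i][2]*b[2][j] + a[i][3]*b[3][j]
            end
        end
        return m
    end

    -- yaw about Z, then pitch about Y, then roll about X
    local function euler(yaw, pitch, roll)
        return mul(Rz(yaw), mul(Ry(pitch), Rx(roll)))
    end

    local a = euler(math.rad(10), math.rad(90), math.rad(0))   -- yaw +10 at pitch 90
    local b = euler(math.rad(0),  math.rad(90), math.rad(-10)) -- roll -10 at pitch 90

    -- The two matrices print identically (up to floating-point noise):
    -- one degree of freedom is gone.
    for i = 1, 3 do
        print(string.format("%7.4f %7.4f %7.4f   |   %7.4f %7.4f %7.4f",
            a[i][1], a[i][2], a[i][3], b[i][1], b[i][2], b[i][3]))
    end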

There is a school of thought that I’m not sure I agree with, 100%, that suggests the development of a language was a necessary pre-requisite for higher level human activity, ie. civilization. I dunno - maybe I just convinced myself.

Point is - he wants to teach “programming”, but he’s pointing out issues with “programming languages”, and I think maybe those aren’t the same level of things. It’s as if he wants to teach about “civilization”, and is trying to do so by complaining about “the English language”. About the closest I can think of to learning “programming”, as opposed to a language, are the only computer courses in college that I took that actually taught me anything useful - one was “data structures” (where we did things like linked lists, B-trees, and so on - again, generic, above the level of a particular implementation), and the other was “Files and Databases”, which could be language-independent, but was very grounded in the real-world, when-the-rubber-hits-the-road issues of performance and so on.

And - perhaps that’s my real issue. He is perhaps TRYING to talk about that 2nd level of programming - and mistakenly using the first level of language as an example, and a lot of what he says simply does not apply there. I should perhaps look again in that light.

One other thing - I understand you don’t want to have to memorize an API - nobody does. But there’s a big difference between being familiar and memorizing details. If I provide you with “ellipse(100, 200, 20)” - you KNOW the numbers describe the ellipse being drawn - if you have to look up which is what, ah well. The key to making good use of an API isn’t in the trivia - it’s in knowing what’s there. So long as you can say “I want to draw an oval, and I recall the API had an ellipse function - and I know where to look to see what the parameters are”, I’d argue you know the API well enough. The JavaScript example presented again mixed up the levels; you don’t need to memorize which parameter is which - you need to know where to find out. And that’s something he never touched on. Better, perhaps, than pretty moving graphics would be a popup of the API docs - something you could bookmark, something with a “back to top” pointer, something you can browse.

I would argue that most knowledge of computer language and API isn’t memorizing details - which parameters mean what in what order - it’s knowing what’s there. The rest is paperwork.

.@Bortels Absolutely, I think his points generally align with yours.

For example, what you refer to as “knowing what’s there” he calls “dumping the parts bucket onto the floor.” He is saying make your tools obvious and easily explorable. That’s never a bad thing.

About memorising the API, Bret Victor believes it should either be encoded in the language (drawEllipseAtX:0 Y:0 withWidth:100 andHeight:100) or easily inferred. I generally agree with this. After a few years of using Objective-C, I am convinced having the argument names right up-front is a good thing. I don’t believe languages need to be designed to do this, though — I’m experimenting in this area with Codea.

As Bret Victor says at the start of his essay: the features are not the point.

This essay will present many features! The trick is to see through them – to see the underlying design principles that they represent, and understand how these principles enable the programmer to think.

He is absolutely trying to talk about a higher-level of creative tools by applying feature examples to existing tools. Sometimes those examples don’t always work (or you don’t always agree with them), but the ideas he is trying to describe through these examples are pretty sound.

re. named arguments - I actually thought of that as a possible “solution”, but there are practical issues, and it largely just “pushes back the issue”. Was that “withWidth”? Or “width” or “Width”? Or are the axes X and Y, not width and height? You’ve traded one set of things to memorize or look up for another (albeit one that’s hopefully easier to remember). Or maybe you take both “width” and “Width”, but what if I pass both? Which overrides? There are enormous practical issues with named parameter passing, which is why it’s fairly uncommon (I pass “width” but you’re looking for “Width” and the call looks good but is failing and I’m right back to looking it up in the API). Apple and Objective-C are perhaps an exception, but remember - Apple doesn’t care about the developer or about making your job easier. Sad, but true.

Now - named arguments do help with readability - perhaps, slightly, for those unfamiliar with the API. I’m not sure those are my target audience. (My target audience is me six months from now trying to remember what I did - and for that, I use comments.)

But - remember I mentioned I learned on the Atari 400, with the membrane keyboard? It has that, at least, in common with the iPad, and I’d suggest the following rule trumps named arguments in this environment:

eschew verbosity

You know what? “ellipse(10,20,50)” pretty clearly tells me we’re drawing an ellipse, and if I want to know what those parameters mean, I can tap the “eye” to bring up help. (Uh - and I just tried it, and it’s not context sensitive? wah. Shows you how often I try to look up an API function). What I don’t want to do is type “(drawEllipseAtX:0 Y:0 withWidth:100 andHeight:100)” on my iPad, ever, even with autocomplete. (Side Note: CamelCase is a pain. I would argue “drawEllipseAtX” yadda yadda has the same semantic content as “ellipse{x=5, etc”, but doesn’t force me to hit the shift key a gazillion times)

I would also say that sticking to Lua trumps that as well; Lua has a syntax for named arguments, even if it is a bit “hacky”… there’s nothing stopping both “ellipse(args)” and “ellipse{width=10, height=20, angle=5}” [1] from being legal. But - even with a named-argument API, I’d keep the names as short as possible while still disambiguating, owing to the platform (i.e. I’d strongly consider x and y rather than height and width, simply because if your angle is 90, they swap anyway).

[1] For readers unaware of the syntax: http://www.lua.org/pil/5.3.html - Now THAT’S learning!
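To make that concrete, here’s a rough sketch of what I mean - my own toy wrapper in plain Lua, not anything Codea actually ships - where one function accepts either the positional or the table-style call and both end up in the same place:

    -- Toy sketch: one ellipse-ish function, two calling styles (plain Lua).
    -- drawIt() just prints; in Codea you’d forward to the real ellipse().
    local function drawIt(x, y, w, h)
        print(string.format("ellipse at (%d, %d), size %d x %d", x, y, w, h))
    end

    local function myEllipse(a, y, w, h)
        if type(a) == "table" then
            -- named style: myEllipse{x=10, y=20, width=50, height=30}
            drawIt(a.x or 0, a.y or 0, a.width or 0, a.height or a.width or 0)
        else
            -- positional style: myEllipse(10, 20, 50, 30)
            drawIt(a, y, w, h or w)
        end
    end

    myEllipse(10, 20, 50)                  -- positional; height defaults to width
    myEllipse{x = 10, y = 20, width = 50}  -- named; same result

The table form costs almost nothing to support, and the names stay short enough to type on glass.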

Bortels (welcome back! We’ve missed you) has said most of what I wanted to say and more. The article is wrong with its basic assumptions. Let’s start at the beginning.

Here’s a trick question: How do we get people to understand programming?

Agree. It’s a trick question. The real question should be: How do we get people to understand computers?

Stuff about the Khan Academy

  • Programming is a way of thinking, not a rote skill. Learning about “for” loops is not learning to program, any more than learning about pencils is learning to draw.

Of course learning about pencils is learning to draw. It’s absolutely amazing to see what a real artist can do with a humble pencil because they’ve learnt all about its capabilities and limitations. A calligrapher can produce an amazing piece of work with some ink and a stick of wood, but only because they really understand pens and so know how much a stick of wood is and isn’t like a pen.

Yes, you can get a long way in drawing without understanding pencils. But you’ll get a whole lot further if you do. Of course, you don’t need to start by explaining all about graphite and so forth, but you sculpt the environment so that that information seeps in by itself.

But what does “is a way of thinking” actually mean? It’s just newspeak.

People understand what they can see. If a programmer cannot see what a program is doing, she can’t understand it.

Rubbish. People understand when they have done it lots of times. Can you see a quaternion? No, but you understand it once you’ve used it lots of times and seen what it is for.

Thus, the goals of a programming system should be:

  • to support and encourage powerful ways of thinking

See my remark above. This is so generic as to be worthless.

  • to enable programmers to see and understand the execution of their programs

Again, I object to the word “see”. Why do I want to see it? If I could see it, I probably didn’t need the computer to do it.

Stuff about live-coding

Alan Perlis wrote, “To understand a program, you must become both the machine and the program.” This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

Why is it a mistake? Programming is telling a computer what to do. Ever tried telling someone or something what to do? You need to enter their world and understand how your instructions will be interpreted.

Here’s my fun anecdote on this. My kids wear glasses and have done for most of their lives. Once, when she was about 2, my daughter lost her glasses somewhere in the house. We spent a long time looking for them and every five minutes were asking “Where did you put your glasses?” which just got a “Don’t know” response. Eventually, I tried a different tack: I told her “Put your glasses on” whereupon she went to where she’d left them and put them on.

My goal is always to get the computer to do what I want it to do and not the other way around. But to do this, I need to understand how the computer will react to what I tell it and for that then yes, I need to “become the machine”.

How do we get people to understand programming?

We change programming. We turn it into something that’s understandable by people.

Whereupon it no longer is programming. Not that “being understandable by people” and “getting a computer to do what it is told” are incompatible, but if you focus on the first exclusively you are in danger of losing the second.

There are lots of vague statements in that first section with nothing to back them up. I can’t say for sure that they are out-and-out wrong, but with no evidence to say that they are right - and a huge Bayesian prior against it - I’m not convinced.


When I say, “I want my kids to learn to program” what do I mean? I’m not totally sure. Here’s what I think I mean:

  1. I had a lot of fun as a kid programming my BBC Micro and CBM64. My kids are a bit like me and would probably enjoy it as well.

    But the environment had a lot to do with that as well. My dad built stuff for us to control from the CBM64 so we could write programs that interacted with our toys. How easy is that these days? Also, as I mentioned, I subscribed to programming magazines where I could copy out some code and type it in. Nowadays, you’d download it off their website (I was going to say, you’d get a disc attached to the magazine, but then realised that not even that is true anymore).

    How do we recreate the effect of that environment now?

  2. I’m having a lot of fun programming now - thanks in no small part to Codea. I’d like to be able to share that time with my kids.

    This is where it gets difficult, though. I’m having a lot of fun because I know the basics of programming. They don’t. So at the moment, the sharing is along the lines of me writing a program that they play with (Anagrams, Hangman). I’ve tried writing a program with my eldest, but it didn’t get anywhere because … wait for it … he doesn’t have a clue what is possible with a computer because he doesn’t understand computers.

  3. I think that if they learn to program at a young age it will make it easier for them to learn mathematics later (Whoa! You didn’t spot that one coming, did you? See What does mathematics have to do with programming? for more details), and will give them a very valuable skill for later in life.

    With this, what I really want is for them to understand computers: namely, what computers are good for and what they aren’t good for. When it’s a reasonable task to give to a computer and when it’s better to do it yourself. Then, I’m sorry to say, “for loops” are everything.

So my learning objectives are simple; I just don’t - at the moment - know how to meet them.

  1. That the student learn that a computer is a tool, and be familiar with its uses and limitations.
  2. That the student have a lot of fun doing it.

Different point (so different post). Simeon made a point about dictionaries and being able to look up words easily and how that enriches his language skills. Sure, but you’re talking about a language that you already know. Ever tried learning a new language from a dictionary? It can’t be done. You need a native speaker around to tell you that the correct word for that concept in that situation is this and not the other. Dictionaries don’t tell you that. Dictionaries are for when you already know the language.

.@Andrew_Stacey interesting that you bring up mathematics — one of Bret Victor’s passions is to rethink how maths is taught. Have a look here: http://worrydream.com (his articles under “kill math”). It would be interesting to know what you think.

My point was not that you learn a language entirely from a dictionary, it was that having an inline dictionary stops you from having to memorise vocabulary. Same way that I don’t memorise every API I use. I agree with Bret Victor on this — rote memorisation is not something you want occupying your mind when you have far more interesting things to fill it with.

I’d also disagree that learning about pencils has much to do with learning to draw. Learning to see is more important, and learning to see the pictures in your mind is one of the hardest things I have ever had to learn. The technical process of transcribing the image in your head onto paper is pleasant, and can be assisted by tools, but it doesn’t happen to your satisfaction unless you have learnt how to visualise things.

Programming is a way of thinking: it involves abstraction, encapsulation, connections, logic, spatial relationships and a whole bunch of other problem solving abilities. Bret Victor’s point is that thinking like a computer (i.e. simulating the instruction pointer in your head) should not be one of those requirements. If you use the former to control a computer without requiring the latter, you are doing a more efficient job (in terms of expending useful mental effort).

Most of my programming now takes place in my head, away from the computer. I’m thinking about data structures, connections, algorithms. Problem solving. Most of the hardest problems end up being solved by drawing pictures and visualising. If I didn’t then have to go and write the code to translate those solutions into a form the computer understood, I would be very happy.

It’s the same with drawing — if I didn’t have to physically draw to make the things I see appear in the world, I would use that instead.

Anything that makes it easier and faster to get an idea from your head and into reality is a good thing.

In the end that’s all it’s about, expressing your ideas. Bret Victor believes that there is a better way to do that than the technical skill of programming — his attempts are still a bit nebulous, but I feel like it’s something worth exploring.

.@Bortels it’s only context sensitive if you select the word “ellipse” and choose “Lookup” from the context menu.

Initial spark

Bret Victor shows off some really fancy IDE capabilities for interactive, computer-aided programming where you can understand everything without prior knowledge, just by doing and exploring.

Proposal

Codea should adopt as many of Bret’s ideas as possible.

My take on it

We cannot change the way Lua looks and works (should we if we could?) and transforming Codea into a Scratch-like IDE (which was proposed in another thread) is something completely different (and shouldn’t be done).

So we’re talking about the wide field of computer-aided programming. Why isn’t Codea more like the things that Bret Victor shows us, and how far can Codea get on a path toward them? Well, I don’t know exactly what Bret has shown. It is either a fully capable version of the IDE that probably only works with small and very specific tasks, or it is a special-purpose IDE for every single example. I’m not sure, but that’s what I think. Because doing it in a universal way is a lot of work, and if you are not a programmer you may not even realize how much work it is.

The easiest form of computer-aided help is probably code intelligence. You type something, the IDE guesses what you are going to type and suggests completions. You open a bracket, the IDE shows parameter lists it knows from scanning existing code and again makes suggestions. If it knows the types of variables it can also suggest specific variables. Simeon has already promised to improve the way of getting help from Codea, which is a very good (and doable) thing.
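Just to show how modest the first step could be, here is a bare-bones sketch in plain Lua of that “guess what I’m typing” idea - the signature table is made up for illustration, it is not Codea’s real metadata:

    -- Hypothetical API metadata: name -> parameter list (illustration only).
    local api = {
        ellipse    = "x, y, width, height",
        rect       = "x, y, width, height",
        line       = "x1, y1, x2, y2",
        background = "red, green, blue",
        text       = "string, x, y",
    }

    -- Return every known function whose name starts with the typed prefix,
    -- together with its parameter list.
    local function complete(prefix)
        local matches = {}
        for name, params in pairs(api) do
            if name:sub(1, #prefix) == prefix then
                matches[#matches + 1] = name .. "(" .. params .. ")"
            end
        end
        table.sort(matches)
        return matches
    end

    for _, suggestion in ipairs(complete("el")) do
        print(suggestion)   --> ellipse(x, y, width, height)
    end

The hard part, of course, is everything around it: scanning your own code for names, guessing types, and presenting it without getting in the way.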

Interactive editing is not so easy. You have drawn a scene with a flower, and now you turn the knob for the petal color. How does the IDE know what to change? The best guess is to re-run everything. This may work for some isolated code if it is simple and self-contained. Or it is part of a larger code sequence but it is safe to just re-run the function you are playing with at the moment. Or not. What if you change the position of the petals? The background must be redrawn, or else you will fill the screen with old impressions of outdated drawings.
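Codea’s run loop already embodies the “re-run everything” answer at the frame level: draw() is called every frame, and as long as the whole scene - background included - is painted inside it, a tweaked value shows up immediately with nothing stale left behind. A minimal flower-ish sketch, assuming only the stock Codea drawing calls (background, fill, ellipse, translate, rotate):

    -- Minimal Codea-style sketch: the whole scene is repainted every frame,
    -- so tweaking petalX, petalY or petalColor and re-running "just works".
    -- Remove the background() call and moving the petals would leave old
    -- impressions on screen - exactly the stale-drawing problem.
    petalX, petalY = 300, 400

    function setup()
        petalColor = color(255, 0, 128)
    end

    function draw()
        background(40, 40, 50)            -- clear away last frame's drawing
        noStroke()
        fill(petalColor)
        for i = 0, 5 do                   -- six petals around the centre
            pushMatrix()
            translate(petalX, petalY)
            rotate(i * 60)
            ellipse(0, 40, 30, 80)
            popMatrix()
        end
        fill(255, 200, 0)
        ellipse(petalX, petalY, 40, 40)   -- flower centre
    end

The hard cases are exactly the ones that don’t fit this shape: setup-time state, accumulated drawing, code that isn’t a pure function of its inputs.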

Instant code replay (or whatever you want to call it) is equally difficult. I mean the punch-card example where you can point to an instruction inside a loop and see its intermediate results in time and space. How is the IDE going to do that for you? I mean: from the point of view of Simeon, who needs to code it?
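Even just the data-gathering half of that is essentially “snapshot every variable at every step” - trivial to hand-roll for one known loop (a plain-Lua sketch below), but expensive and invasive for an IDE to do automatically for arbitrary code:

    -- Record a snapshot of the interesting locals at every loop iteration;
    -- a replay UI would let you scrub through this table in time and space.
    local trace = {}

    local x, y = 0, 0
    for i = 1, 10 do
        x = x + i
        y = y + x
        trace[#trace + 1] = { i = i, x = x, y = y }
    end

    for _, step in ipairs(trace) do
        print(string.format("i=%2d  x=%3d  y=%3d", step.i, step.x, step.y))
    end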

See, what Bret has shown us is a presentation of design principles, not a fully working product, because it would be a lot of work to scale it up to something the size of Codea.

Interlude: while talking about Bret Victor, Brett Terpstra comes to mind (because of the Christian names; silly, isn’t it?). He writes a lot of scripts to help himself (and others) get work done. Why does he write scripts? Because it is enough for him, and because writing a graphical tool that everybody can use without even reading the first chapter of the manual would not make him Brett Terpstra, the guy who probably writes ten tools a day to improve his workflow and then tells you about the best one of them. He would be Brett Terpstra, the once-a-month super-polished tool presenter. End of interlude.

I appreciate any help a computer can provide, but if I had a wish list of Codea’s future capabilities and adjusted it for Simeon’s limited time, then only very few of Bret’s ideas would make it into the “realistic” edition. Nevertheless, Simeon may try and surprise us with things nobody has done before.

(This bit is somewhat silly.)
Learning to see is easy. I took a book away with me on holiday when I was 13 and after a month I’d learnt to see and could draw. It’s learning to make something new that is the interesting bit, and that’s where you really need to understand the capabilities of your tools.

In what you (@Simeon) say, there’s still a bit of conflation between initial learning and onward development. Bret’s article reads like “Here’s how to help someone who has no idea”, whereas what I get from what you write is “Here’s what would help someone who is some way along the road.” Then dictionaries and so forth help. But first you have to learn how to use them. The peril of Wikipedia is not that it is there, but that it makes us lazy. If we knew how to use it properly - read, if we knew how to teach how to use it properly - then we wouldn’t have all this angst about students “just” quoting from Wikipedia.

Maybe I misunderstood something. “thinking like a computer (i.e. simulating the instruction pointer in your head)”. That isn’t what I mean by “thinking like a computer”. I mean thinking about data structures, control instructions, and so forth. That’s “thinking like a computer”.

But here’s where I really, really, really disagree:

If I didn’t then have to go and write the code to translate those solutions into a form the computer understood, I would be very happy.

It’s the same with drawing — if I didn’t have to physically draw to make the things I see appear in the world, I would use that instead.

Anything that makes it easier and faster to get an idea from your head and into reality is a good thing.

In the end that’s all it’s about, expressing your ideas.

This makes me quite sad. It reduces the medium of expression to a secondary role. If I draw, then my choice of charcoal versus pencil versus pen versus iPad makes a huge difference to what I draw and how it will turn out. Similarly, it’s only when I write a proof down on paper that I really, truly have proved it. I have all manner of “proofs” zooming round in my head, but only when I make them concrete do they actually coalesce into something real. That process of getting it down on paper is not nothing. That’s the “pushing the paint around on the canvas” part. That’s where the vague and hazy idea becomes a reality, and that’s the magic.

As for rote learning, well I will admit that when Bret said something disparaging about learning times tables (at least, I read it that way) then my tolerance level went down quite a bit. One fun thing about watching everyone play with Codea is seeing how much maths everyone is using (and seeing how many “I don’t know any maths” comments there are!). How much easier would life be if everyone here remembered their times tables? Or their trigonometry relations? SOHCAHTOA anyone?

First off, thank you all for the great discussion! Good stuff. :slight_smile:

There are so many points to respond to that I think I’ll just try to address a couple of specific ones. It is my perspective that computers and technology work for US, not vice versa. I am not a parser, nor a compiler, nor a memory manager (not recently, anyway). There is a great benefit to understanding how things work at the lower level, but if I am trying to get something done, the underlying mechanics of the thing should not impede my focus on capturing and implementing the actual concept I am working on. We do not, by and large, spend a lot of time futzing with our pens or mechanical pencils trying to get them properly configured to write something. We grab them and we write with them.

I started with BASIC, moved to machine code (didn’t have an assembler), then assembly, C, C++, Perl, Java, C#, PHP, Python, Jython, Ruby, Groovy, ObjC and Clojure. I spend most of my time writing in Python nowadays and I absolutely do not miss the other languages. If I need performance for something that is CPU bound (most work I do now is IO-bound client/server stuff) I will write some Java.

Most of the difficulty I ever have is learning the damn frameworks for each language platform. Each platform has differences that really do not go hand in hand with the general concept of knowing how to program. Any tools that help inform me about the specifics of platform frameworks save me time, and it is not NECESSARY for me to have memorized .NET GDI or DirectX calls, or Swing (GridBag!), or Spring.

If I switch to do some Java programming, help me remember the UI methods and parameters so I can get the damn thing done that I am trying to do. My goal is not to “Learn Java Swing”, my goal is to “develop a front-end for some analytics tools”. I view these activities largely as “Yak Shaving”: any apparently useless activity which, by allowing you to overcome intermediate difficulties, allows you to solve a larger problem.

I guess the genesis of the whole discussion on my part is making it easier to discover the functionality available to me in a new platform/tool, such as Codea. Once I get the hang of the specific dialects, it’s easier to remember. If I can’t even get out of the starting gate because I don’t know how someone else has categorized a concept, it’s less useful to me. Affordances are helpful there: don’t get in my way, and make it easy for me to see what tools and functionality are available to me.

.@Codeslinger, I like your breakdown, but I need to clarify a misstatement of my original reason for posting this. I’ll quote myself here:

Xavian Said:

All this in mind, I came across a fantastic article tonight posted on Hacker News that is directly relevant to reaching goals of making modern programming easier to learn and there are several suggestions directly applicable to Codea itself:

http://worrydream.com/LearnableProgramming/

There are several suggestions directly applicable to Codea itself, which I outlined as:

“Make meaning transparent” and “Explain in context” sections

I am not advocating the stance of:

Codea should adopt as many of Bret’s ideas as possible.

As many people have discussed, a lot of what he is saying is difficult if not impossible to apply to existing approaches. It’s the underlying concepts that are most interesting.

If I were to go back and state my observation more simply, I might just say:

I like Codea, it’s pretty easy to use, and my only minor nitpick is that I wish Codea’s context-sensitive documentation feature were a little better.

In any case, this spurred a great discussion and I have a much better understanding of other folks’ perspectives. I think there was a lot of reaction to the idea of adopting all of Bret’s approaches, and how doing that could cause other development items for Codea to grind to a halt or lessen Codea’s appeal to existing users.

Thanks for the copious and detailed feedback! :slight_smile:

-Xavian

Just read Kill Math. Sure, kill whatever that is, it sounds horrible. But it ain’t what I do, nor what I try to teach.

.@Andrew_Stacey

The peril of Wikipedia is not that it is there, but that it makes us lazy.

Students quoting from Wikipedia is not really relevant to what I had in mind. The fact that Wikipedia exists means that I can offload things from my head — I know that I can reference them when the time comes. This frees up my mind for real problem solving.

There is the stuff inside your head and the stuff outside. I would prefer, as much as possible, for the things not directly involved with creative expression to exist outside my head.

This makes me quite sad. It reduces the medium of expression to a secondary role.

To me the medium doesn’t matter as long as the expression is consistent with what is in my head. The tools are there to make it happen. Sometimes I find I can’t get the idea out of my head fast enough, the tools feel like a limitation at that point.

As for pushing paint around canvas — I do that too. Sometimes you randomly draw or paint and then your visual system takes over and you end up constructing a model from your random shapes, lines and whatever emotions you were feeling at the time. It’s a great experience, but I still place the importance on visualising.

Edit: In the above case, where the medium can help you experiment and trigger you into discovering good ideas, I agree that it’s important.

Maybe I misunderstood something. “thinking like a computer (i.e. simulating the instruction pointer in your head)”. That isn’t what I mean by “thinking like a computer”.

I don’t consider design, data structures, abstraction, and spatial problem solving to be “thinking like a computer.” I find that I apply the same mental tools when forming an argument, solving a puzzle, building a model, and so on. Those mental tools should be used often and well.

By “thinking like a computer” I refer to when you observe a code listing and mentally simulate what happens when the code runs (most often without realising it). The argument is that the computer should do this for you, immediately and often.

That’s where the vague and hazy idea becomes a reality, and that’s the magic.

I agree with this. And I don’t think anything that Bret Victor wrote goes against this idea. Transcribing an idea is extremely important, but if you can do it faster, more directly, and more efficiently, then I think you should do that.

Because then you can move onto the next idea even faster.

My wife has been a public school teacher for 25 years, is very skilled at her work, and recognizes that different people need very different approaches to understand and make use of information. There is no perfect approach that works for everyone. One person’s inspiring insight is another’s plodding frustration. The tool or technique that looks glowingly obvious to you may be a muddy mess to someone else.

Speaking for myself, I like nothing more than examples. If I can find an example that’s close to the problem at hand, I can twist it, turn it, learn from it, and build what I need. If I’m thrown back on just the bare documentation, things move very slowly.

Give me examples, and I’ll move… maybe not the world, but at least a few pixels.