Forgive me if I don’t phrase this quite right - I’m not a programmer.
I’m defining a class which I expect to use in lots of different programs. Part of the point is to have a quick way of defining particular instances of this class. Now the full range of those particular instances is quite large (of the order of several hundred). In any one program I don’t expect to use more than a dozen, but I want to be able to pick and choose from the whole range.
I can see a few ways to do this:

1. Brute force: define the class and then define every instance up front. This makes it easy to use, but it means I've defined a whole load of stuff that any given program will never touch.
2. Group the instances into families, and write a routine that, when called, defines all the instances in one family. This adds an extra hoop to jump through in the setup, but keeps the number of defined instances down somewhat.
3. Taking that further: write a routine that defines only the instances named in a list passed by the user.

I've put a rough sketch of what I mean just below.
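In case my descriptions are too vague, here is roughly what I have in mind. I've used Python purely for illustration, and the names (Widget, define_family, define_instances) are made up, not anything I actually have:

```python
# A made-up stand-in for my class: a name plus about four values.
class Widget:
    def __init__(self, name, a, b, c, d):
        self.name = name
        self.values = (a, b, c, d)

# Option 1: brute force -- define every instance up front.
ALL_WIDGETS = {
    "w1": Widget("w1", 1, 2, 3, 4),
    "w2": Widget("w2", 5, 6, 7, 8),
    # ... several hundred more ...
}

# Option 2: group the data into families and only build a family on request.
_FAMILY_SPECS = {
    "family_a": {"w1": (1, 2, 3, 4), "w2": (5, 6, 7, 8)},
    "family_b": {"w3": (9, 10, 11, 12)},
}

def define_family(family):
    """Return a dict of instances for one named family."""
    return {name: Widget(name, *vals)
            for name, vals in _FAMILY_SPECS[family].items()}

# Option 3: build only the instances the caller asks for by name.
_ALL_SPECS = {name: vals
              for fam in _FAMILY_SPECS.values()
              for name, vals in fam.items()}

def define_instances(names):
    """Return a dict of instances for just the requested names."""
    return {name: Widget(name, *_ALL_SPECS[name]) for name in names}
```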
Ultimately, I guess I could program some variant of all three, since they aren't mutually exclusive (well, the first makes the others redundant, but a minor variant of it wouldn't). So my real question is whether I should care about this too much. Is there any significant performance hit from defining a few hundred objects that never get used?
Each particular object isn’t very big, just a list of about 4 values, but as I said there are several hundred of them!
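To make that concrete, here is the sort of back-of-envelope check I had in mind (again Python just for illustration, with the same made-up Widget class; sys.getsizeof only reports shallow sizes, so this is only a rough lower bound):

```python
import sys

# Made-up stand-in for my class: a name plus about four values.
class Widget:
    def __init__(self, name, a, b, c, d):
        self.name = name
        self.values = (a, b, c, d)

w = Widget("example", 1.0, 2.0, 3.0, 4.0)

# Shallow sizes only, so this underestimates a little, but it gives the scale.
per_instance = (sys.getsizeof(w)
                + sys.getsizeof(w.__dict__)
                + sys.getsizeof(w.values))
total_kib = per_instance * 500 / 1024
print(f"roughly {per_instance} bytes per instance, ~{total_kib:.0f} KiB for 500 of them")
```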