@JakAttak You edited your post.
Getting off-topic…
I decided to test this. I made a test program which created a table of size 10,000, with the contents being {1, 2, 3, …, 10000}. I then had two functions, one of which checked the size of the table using table.maxn(), the other using #, each doing this (redundantly) 30,000 times and then printing whatever size it found.
The results are… staggering. Checking the length of the table 30,000 times with table.maxn() and then printing it took 56.3948 CPU seconds… The exact same test using # took only 0.0270844 CPU seconds.
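For reference, a minimal sketch of the benchmark described above (the function names and the use of os.clock() are my own choices, not necessarily what the original test used):

```lua
-- Build the 10,000-entry test table {1, 2, 3, ..., 10000}.
local t = {}
for i = 1, 10000 do t[i] = i end

-- Time 30,000 redundant length checks using the given length function.
local function bench(label, lengthOf)
  local start = os.clock()
  local n
  for _ = 1, 30000 do
    n = lengthOf(t)
  end
  print(label, n, os.clock() - start .. " CPU seconds")
end

bench("table.maxn", table.maxn)                 -- walks every key each call
bench("# operator", function(x) return #x end)  -- uses the length operator
```

The gap makes sense: table.maxn() iterates over every key in the table on each call, while # is a primitive operator the VM can answer almost immediately for a dense array.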
As a note, when I increased the length of the table, both table.maxn() and # took longer - table.maxn(), of course, by a much larger factor.
Edit: Even after forcing Lua to cache the table.maxn function in a local variable, it still took a huge amount of time.
Edit 2: After some more experimentation, table.maxn() and # turn out to have different behaviours. After making a table like {1, nil, nil, nil, nil, nil, nil, nil, nil, 10}, # counted the table length as 1, while table.maxn() counted it as 10.
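That difference is easy to reproduce. A small sketch (note that for a sparse table the result of # is implementation-dependent, so the value you see may differ from the test above):

```lua
local t = {1, nil, nil, nil, nil, nil, nil, nil, nil, 10}

-- #t may return 1 or 10 here: both are valid "borders" of this
-- sparse table, and Lua is allowed to pick either one.
print(#t)

-- table.maxn scans all numeric keys and always finds the largest,
-- so it reliably prints 10 (at the cost of a full traversal).
print(table.maxn(t))
```

This is why # is unreliable on tables with holes: it only guarantees *a* border, not the highest index in use.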
So, in conclusion: if your table may include nil values, use table.maxn(); otherwise, use #. The speed difference is irrelevant unless you are measuring huge arrays tens of thousands of times.
The exact behaviour of # appears to be that it returns any integer n (a "border") where
table[n] ~= nil and table[n + 1] == nil
This means that inserting something after the end of a table can actually make # report a lower length than before inserting!
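A hedged sketch of how that can happen (the exact values printed depend on the Lua implementation's internal table layout, so I deliberately make no promises about the output):

```lua
local t = {}
for i = 1, 5 do t[i] = i end
print(#t)   -- 5: the table is a dense array, so the border is unambiguous

-- Insert an element well past the end, leaving a hole at indices 6..9.
t[10] = 10

-- Now both 5 and 10 are valid borders (t[5] ~= nil, t[6] == nil;
-- t[10] ~= nil, t[11] == nil). Depending on how the table is stored
-- internally, # may report either value, and after further inserts or
-- rehashes the reported length can drop back down.
print(#t)
```

In short, once a table has holes, # can legally return any border, so its result may shrink even though you only added elements.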