An idea: train a custom model (like ChatGPT, but smaller) to program with Codea

Hi guys, how about training a small LLM that can be deployed locally?
It could program with Codea.
That would make it easy to write any Codea app.

This is actually a really interesting project that does exactly that; in this case it fine-tunes an open-source language model for the Godot scripting language:

Thank you, I will study the repo.

Here is another site. It is a 7B model and can write some Lua programs (it cannot use Codea yet; I think it needs to be fed some Codea knowledge). You can try to chat with it:

I am a newbie with LLMs, so I am learning how to use them. It is very interesting.

And another idea: train an LLM as a Codea documentation assistant.

We could chat with it to ask detailed technical questions, and it could give correct answers and some samples.

For example, assume I know a bit of Codea4, but not too much. I want to create a scene and set the sky; below is the chat:

Me: Please tell me how to create a scene in Codea4, with a blue sky.

Bot: OK, you need to use the function scene() and … blah blah … here is a sample:

scn = scene()
scn.sky = ...
...

It would make Codea very easy to use.

If we could get an LLM running locally on iPad, it would be really nice to integrate. As it stands, we might as well just implement OpenAI’s API if it’s going to have to ping a server for responses.
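
For what it’s worth, here is a rough sketch of what that could look like from inside Codea today, using Codea’s http.request against OpenAI’s chat completions endpoint. The model name, the system message (as a crude way to feed it some Codea context), and the askAssistant helper are just assumptions for illustration, not a finished integration:

-- Minimal sketch: ask an OpenAI model a Codea question from inside Codea.
-- Assumptions: you have an API key, and "gpt-3.5-turbo" is an acceptable model.

local API_KEY = "YOUR_API_KEY" -- placeholder, not a real key

function askAssistant(question, callback)
    -- Build the chat request body; the system message is where some
    -- Codea-specific knowledge could be injected (assumption).
    local body = json.encode({
        model = "gpt-3.5-turbo",
        messages = {
            { role = "system", content = "You are a Codea documentation assistant." },
            { role = "user", content = question }
        }
    })

    http.request("https://api.openai.com/v1/chat/completions",
        function(data, status, headers)
            -- Pull the assistant's reply out of the JSON response
            local response = json.decode(data)
            callback(response.choices[1].message.content)
        end,
        function(error)
            print("Request failed: " .. error)
        end,
        {
            method = "POST",
            headers = {
                ["Content-Type"] = "application/json",
                ["Authorization"] = "Bearer " .. API_KEY
            },
            data = body
        })
end

-- Example use:
-- askAssistant("How do I create a scene with a blue sky?", print)

A local model would presumably be a drop-in replacement here: point the same request at a local server instead of OpenAI’s endpoint.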

This project can run on iPad/iPhone, but it needs the newest devices (with more memory); my old iPad cannot run it. For now it supports vicuna-v1-7b:

https://mlc.ai/mlc-llm/
