
Thursday, June 15, 2023

Useful Linux Command Line AI Tool

I've been looking for a Linux command line tool that leverages AI to streamline my terminal experience. There are so many instances where I forget a command, or need some convoluted sequence of shell commands that takes me a whole lot of time to figure out. This is way easier now with LLMs, but moving back and forth between the terminal and the browser isn't the smoothest, and the browser doesn't have access to my local device. Letting an AI system run commands on your local device is pretty scary when you think about it, though: imagine if you give it a prompt and it incorrectly interprets the meaning and ends up deleting important files, along with any backup you may have for those files. For this reason, make sure you have cloud or isolated/disconnected backups.

Initially, I came across Warp, which seems extremely powerful and has most of what I want; however, there is currently no Linux version. Then I found Fig.io, but I couldn't get it to configure properly, and it seems the AI tool is only available through special access. So I kept searching and eventually found Yai. The thing I like about yai is that it was easy to install and configure:
curl -sS https://raw.githubusercontent.com/ekkinox/yai/main/install.sh | bash
This command will install Yai for you. Then, to initialize and use it, just type:
user@system:~$ yai
 
in your terminal, and it will prompt you for your OpenAI key. If you want to specify any details related to the OpenAI model or pre-prompt context, you can modify the file below:
user@system:~$ emacs .config/yai.json
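For reference, the config is a small JSON file; a rough sketch is below. The field names are from memory and may differ in your version of yai, so check the file that yai generates on its first run.

{
  "openai_key": "sk-...",
  "openai_model": "gpt-3.5-turbo",
  "openai_temperature": 0.2,
  "user_default_prompt_mode": "exec",
  "user_preferences": "I run Ubuntu and prefer bash one-liners"
}

The user_preferences-style field is where the pre-prompt context mentioned above would go.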
Once everything is configured, it's pretty cool what you can do. There are two options. The first is to run yai in execute mode with the flag -e, which will generate shell commands and any other text/code based on the prompt you give it. You are then asked whether you want to run the command; you should review what yai shows to make sure you don't do any harmful operations. The other mode is -c, which runs yai in a chat-based mode. This is good for general inquiries. At the moment yai doesn't have internet access itself and is limited by the OpenAI model's token context size. This means you can't just pipe an arbitrarily sized document or piece of code to it to help you understand it. To get around this you can use commands like head, as in the examples below.
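Here are a few usage sketches. The prompts and file names are just placeholders, and the last line assumes yai reads piped input, which is how I use the head workaround:

user@system:~$ yai -e "compress every .log file in this directory into logs.tar.gz"
user@system:~$ yai -c "what does the -np flag do for mpirun?"
user@system:~$ head -n 100 analysis.py | yai -c "explain what this script does"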

It would be nice if yai used vector stores and other tools (e.g. similar to Langchain) to enable questions and answering of your documents and code. I would take a stab at it but the problem is yai is written in the Go programming language which I have absolutely no experience with. The tool is still very useful though as I'll show below.

A somewhat complicated example

First, let me give some context about what I'm doing with this example. I want to ask yai to create a LAMMPS simulation script and then use the Linux task spooler utility to schedule the simulation job. The prompt I gave it (after a few iterations, more on that later) was:
yai -e " I want you to create a folder in my home directory called simple_job; if the folder exist just delete it. In the folder please create a LAMMPS molecular dynamics script using the lennard-jones potential for Argon gas in a box with lengths 20 angstroms (ex. region 1 block 0 20 0 20 0 20), be sure to create 100 randomly placed Argon atoms in the box  and set mass. You can use a NVT ensemble and set the temperature to 300 Kelvin and run the simulation for 1000 timesteps. Make sure you validate all the LAMMPS commands you use. After creating the LAMMPS script, I want you to use the task-spooler, command 'tsp', to run the command to run is 'mpirun -np 1 /opt/lammps/23Jun2022/build/lmp -sf omp -in {NAME_OF_SCRIPT}' where {NAME_OF_SCRIPT} is the name you used for the LAMMPS simulation script you created in the folder simple_job."
So what did this give in terms of the commands? At first it looks a bit difficult to read through; what you see is that it strings all the commands together and, for the script, it uses echo. Here is the output:
mkdir -p ~/simple_job && rm -rf ~/simple_job && mkdir -p ~/simple_job && cd ~/simple_job && echo -e 'units lj
atom_style atomic
lattice fcc 0.8442 region simbox block 0 20 0 20 0 20 create_box 1 simbox create_atoms 1 random 100 12345 simbox
mass 1 39.948 pair_style lj/cut 2.5 pair_coeff 1 1 1.0 1.0 2.5 velocity all create 300.0 12345 fix 1 all nvt temp
300.0 300.0 0.1 thermo 100 thermo_style custom step pe temp press run 1000' > simulation.lammps && tsp mpirun -np 1
/opt/lammps/23Jun2022/build/lmp -sf omp -in simulation.lammps

  Create the folder simple_job, delete it if exists, and create the LAMMPS script with the desired parameters. Then use 

  confirm execution? [y/N]
Is this correct, and does it work? The LAMMPS commands don't look like they are on newlines, but they indeed are when you look at the final file produced. Did it run? It did! It submitted the task to tsp and the LAMMPS script actually ran. This is pretty cool if you ask me, especially if you want to create a draft template simulation folder/file setup. Below is the simulation result visualized, which seems to run without any wonky behavior even though the pair potential coefficients are not correct. I had to manually add the dump command, but I could have specified this in the prompt.
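As a side note, you can check on the queued job with the task spooler itself; running tsp with no arguments lists the queue. For the visualization, the dump command I added looks something like the line below, with the output interval and filename being my own illustrative choices rather than anything yai produced:

dump 1 all atom 100 dump.argon.lammpstrj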

Simulation result from a prompt given to Yai.

Caveats

The example above didn't work smoothly on the first try. Well, yai did always carry out all the steps; however, the LAMMPS script regularly contained errors. This is due to the GPT models' limited knowledge of LAMMPS, so it's not an issue with yai. This is where some kind of vector store or other tool would help yai know about the details of the commands it's going to use or the code/script it will produce. If I could have told yai, "Be sure to review the LAMMPS documentation so that you use the correct syntax, commands, and arguments," that probably would have helped. I think it took me about 5 iterations to get the prompt to work. The changes I usually had to make were related to the details of the LAMMPS script.

Where to go from here

You can see that these types of tools are going to change how computational work is done. They are going to improve efficiency and ease of use for the complex sequences of steps that are typical in computational work. My hope is that these tools only get better and can incorporate documents or other resources as part of the generative output.

