Assignment 4: Stanford Shell

Most of this assignment was written by Jerry Cain, with some additions by Ryan Eberhardt.

Kudos to Randal Bryant and Dave O’Hallaron of Carnegie Mellon for assignment inspiration and for parts of this handout. Huge thanks to Truman Cranor for writing the command line parser using tools you’ll learn in CS143, which you should all take someday because it’s an amazing class.

You’ve all been using shells to drive your conversations with UNIX systems since the first day you logged into a myth. It’s high time we uncover the shell’s magic by building one, almost from scratch, to support process control, job lists, signals, pipelines, and I/O redirection – all while managing the interprocess concurrency problems that make a shell’s implementation a genuinely advanced systems programming project. There’s lots of neat code to write, and with your smarts and my love to guide you, I’m confident you can pull it off.

Due Date: Friday, July 30th, 2021 at 11:59 p.m. PT. If you submit by Thursday, July 29th at 11:59 p.m., we’ll give you a 3% bonus.

All coding should be done on a myth cluster machine, as that’s where we’ll be testing all assign4 submissions. You should clone the git repository we’ve set up for you by typing:

git clone /usr/class/cs110/repos/assign4/$USER assign4

Doing so will create an assign4 directory within your AFS space, and you can descend into that assign4 directory and code there. There’s a working sanitycheck for this assignment, and your repo includes a soft link to a fully functional solution. When in doubt about how something should work, just run my solution (which can be found at ./samples/stsh_soln) to see what it does and imitate the solution.

Opening notes

I personally think this is the hardest assignment of the quarter (others might disagree), but it is also my favorite assignment of the quarter. This is how real shells work – you can clone bash’s source code or zsh’s source code and see very similar patterns to what you’re writing in this assignment.

The starter code is a functional shell, which you are asked to extend. It’s worth compiling the starter code and playing with that shell to see what it can currently do. It’s also worth running the sample solution and spending some time with that to see what you’re asked to build. As you’re working on this project, test very often to ensure you haven’t broken anything!

It won’t take you nearly as long to figure out the starter code as it did with Assignment 2, but you should still expect to spend some time figuring out pipeline and command and friends.

You should know your way around your own shell before building this one. You should be familiar with the following things:

Builtin stsh Commands

Your Assignment 4 shell needs to support a collection of builtin commands that should execute without creating any new processes. You might want to run the sample solution and play around to see how to interact with these builtins.

The required builtins are quit, exit, jobs, fg, bg, slay, halt, and cont.

quit, exit, and jobs are already implemented for you. You’re responsible for implementing the others and ensuring the job list is appropriately updated.

Getting Started

Inspect the stsh.cc file we give you. This is the only file you should need to modify. The core main function you’re provided looks like this:

int main(int argc, char *argv[]) {
    pid_t stshpid = getpid();
    installSignalHandlers();
    rlinit(argc, argv); // configures stsh-readline library so readline works properly
    while (true) {
        string line;
        if (!readline(line)) break;
        if (line.empty()) continue;
        try {
            pipeline p(line);
            bool builtin = handleBuiltin(p);
            if (!builtin) createJob(p); // createJob is initially defined as a wrapper around cout << p;
        } catch (const STSHException& e) {
            cerr << e.what() << endl;
            if (getpid() != stshpid) exit(0); // if exception is thrown from child process, kill it
        }
    }

    return 0;
}

The readline function prompts the user to enter a command, and a pipeline record is constructed around it. readline and pipeline (which is different from the pipeline function you implemented for Assignment 2) are implemented via a suite of files in the stsh-parser subdirectory, and for the most part you can ignore those implementations. You should, however, be familiar with the type definitions of the command and pipeline types, which appear right here:

const size_t kMaxCommandLength = 32;
const size_t kMaxArguments = 32;
struct command {
  char command[kMaxCommandLength + 1]; // '\0' terminated
  char *tokens[kMaxArguments + 1]; // NULL-terminated array, C strings are all '\0' terminated
};

struct pipeline {
  std::string input;   // the empty string if no input redirection file to first command
  std::string output;  // the empty string if no output redirection file from last command
  std::vector<command> commands;
  bool background;
  
  pipeline(const std::string& str); // constructor that parses an input string
  ~pipeline();
};

Check out what the initial version of stsh is capable of before you add any new code.

Milestones

The best approach to implementing anything this complex is to invent a collection of milestones that advance you toward your final goal. Never introduce more than a few lines of code before compiling and confirming that the lines you added do what you expect. I repeat: Never introduce more than a few lines of code before compiling, testing, and confirming that the additional lines do what you expect. View everything you add as a slight perturbation to a working system that slowly evolves into the final product. Try to understand every single line you add, why it’s needed, and why it belongs where you put it.

Here is a sequence of milestones I’d like you to work through in order to get started:

  1. Descend into the stsh-parser directory, read through the stsh-readline.h and stsh-parse.h header files for data type definitions and function/method prototypes, type make, and play with the stsh-parse-test to gain a sense of what readline and the pipeline constructor do for you. In general, the readline function is like getline, except that you can use your up and down arrows to scroll through your history of inputs (neat!). The pipeline record defines a bunch of fields that store all of the various commands that chain together to form a pipeline. For example, the text cat < /usr/include/stdio.h | wc > output.txt would be split into two commands – one for the cat and a second for the wc – and populate the vector<command> in the pipeline with information about each of them. The input and output fields would each be nonempty, and the background field would be false.

  2. Add code to Stsh::createJob to get a pipeline of just one command (e.g. sleep 5) to run in the foreground until it’s finished. You’ll need to construct an argv array on the stack, copying in the command and tokens from the first command in the pipeline. Rely on a call to waitpid to stall stsh until the foreground job finishes. Ignore the job list, don’t worry about background jobs, pipelining, or redirection. Don’t worry about programs like emacs just yet. Focus on these executables instead: ls, date, sleep, as their execution is simple and predictable.

    Testing suggestion: Try running sleep 3. It should run for 3 seconds, and then the stsh> prompt should reappear after the sleep.

  3. Read through stsh-job-list.h, stsh-job.h, and stsh-process.h to learn how to add a new foreground job to the job list, and how to add a process to that job. Add code that does exactly that to the stsh.cc file, right after you successfully fork off a new process. After your waitpid call returns, remove the job from the job list by setting the process’s state to kTerminated and calling STSHJobList::synchronize. If it helps, inline cout << joblist; lines in strategically chosen locations to confirm your new job is being added after fork and being removed after waitpid.

    Testing suggestion: Add cout << "Joblist after adding process:" << endl << joblist; after adding the new process to the job list (before waitpid), and add cout << "Joblist before return:" << endl << joblist; to the very end of createJob:

    stsh> sleep 3
    Joblist after adding process:
    [1] 1965986 Running      sleep 3
    Joblist before return:
    stsh>
    
  4. Establish the process group ID of the job to be the PID of the process by investigating the setpgid system call. Every process runs in one process group, and group membership is inherited on fork(), so right now, stsh and all its children are running in the same group. However, it’s conventional to run all of the processes of a job in their own group, separate from the parent shell or any other jobs. This makes job control easier, since you can send signals to an entire group if you want to pause/resume/kill a job, and generally makes it easier to identify groups of processes working together.

    After fork(), in both the parent and the child (see Tips and Tidbits below), you should use setpgid to add the child to its own process group (e.g. a child with pid 123 should be added to group 123). Right now, the child will be the only process in the group, but we’ll add more processes later on when you implement pipelines of multiple commands.

    Testing suggestion: Open two terminals logged into the same myth machine (you can echo $HOST on one terminal you’re logged into, then ssh [email protected] from the other.) On one terminal, run ./stsh and run sleep 100 from within stsh. On the other, run the following ps command:

    🍉 ps o pid,ppid,pgid,stat,user,command -u $USER | grep "PGID\|sleep\|stsh" | grep -v grep
        PID    PPID    PGID STAT USER     COMMAND
    1274737 1274629 1274737 S+   rebs     ./stsh
    1274738 1274737 1274738 S    rebs     sleep 100
    

    Note that stsh and sleep are in different process groups (PGID column), and the PGID of sleep matches its PID.

  5. Add the ability to kill or pause a job by pressing ctrl-c or ctrl-z on the keyboard. If the shell receives SIGINT (ctrl+c) or SIGTSTP (ctrl+z) while a foreground job is running, it should forward the signal to the foreground process group. You can send a signal to a group using the killpg(pgid, signal) syscall. Although this will work the same as using kill since there is only one process in the group, it’s important to use killpg for later, when we have jobs with several commands (e.g. cat words.txt | sort | uniq | wc -l).

    To do this, you’ll need to block SIGINT and SIGTSTP to prevent the shell from being killed or stopped. You’ll also need to replace your waitpid call from milestone 2. Now, instead of simply waiting for the child to terminate, we need to wait for it to terminate or for SIGINT/SIGTSTP to come in, so that we can forward those signals to the child.

    The logic should look something like this:

    while the foreground job is running (see STSHJobList::hasForegroundJob):
        wait for SIGINT, SIGTSTP, or SIGCHLD
        if SIGINT or SIGTSTP came in:
            forward the signal using killpg()
        if SIGCHLD came in:
            use waitpid to get the status of the child
            update the joblist with the status
    

    At this point, you should also add extra flags to waitpid so that you can pick up on child processes that stop/continue. You should also be able to handle child processes that terminate unexpectedly (e.g. segfault).

    Some important notes:

    • You may find STSHJob::getGroupID to be helpful.
    • Signal handling configuration is inherited by child processes. You will need to unblock signals (SIG_UNBLOCK) to ensure that child processes can properly handle them.
    • When you forward SIGINT or SIGTSTP to a child, you should not assume that those signals will kill/stop the child, since the child could ignore those signals (e.g. vim does not exit when you press ctrl+c). You should only update the job list when waitpid tells you that a child stopped, exited, or continued.

    Note: We’ve uploaded a video overviewing some pieces of this milestone here. It may be helpful if you’re having some trouble putting this together.

    Testing suggestions:

    Try running sleep 5 in your shell, and press ctrl+c. Then run jobs to print the job list. The list should be empty, since sleep terminated. Try it again with ctrl+z. The job list should show sleep as Stopped.

    stsh> sleep 5
    ^Cstsh> jobs
    stsh> sleep 5
    ^Zstsh> jobs
    [2] 1279583 Stopped      sleep 5
    stsh>
    

    If this doesn’t work, think about all the steps involved in the process, and try to find ways to observe what is happening and confirm which steps are happening correctly:

    • Is your shell receiving the SIGINT/SIGTSTP? You can check this with some print statements.
    • If so, does it seem to be sending the signal to the right place? You can print out the killpg arguments and return value to make sure that call is working.
    • If so, does the child process seem to be receiving the signal and handling it using the default behavior? Run the ps command from the previous milestone. If you sent SIGINT, is the child gone from the list, or marked a zombie (Z in the STAT column)? If you sent SIGTSTP, is the child stopped (T in the STAT column)?
    • If so, what is your waitpid call doing? Is it returning the right PID? What are you seeing from WIFEXITED/WIFSIGNALED/WIFSTOPPED/WIFCONTINUED?
    • If your waitpid call is working fine, is the problem with updating the joblist? Try printing the joblist at different points to see.

    Also ensure that your code works for children that do not exit gracefully. You can test this with ./fpe 3, which sleeps for 3 seconds and then crashes due to a floating point exception:

    stsh> ./fpe 3
    stsh> jobs
    stsh>
    
  6. Make sure that if SIGINT or SIGTSTP come in while no foreground job is running (e.g. we’re displaying the shell prompt and waiting for user input), nothing happens. The shell should not exit or stop. Also, if you press ctrl+c at the shell prompt and then sleep 5, sleep should sleep for a full 5 seconds without immediately quitting due to SIGINT. If you have a print statement when calling killpg, that print statement should not appear.

    Keep in mind that if SIGINT arrives while it is blocked, it will be added to the pending set and delivered when sigwait is called. If the user presses ctrl+c while the shell prompt is displayed, that will add SIGINT to the pending set, and then SIGINT will be delivered in createJob. We don’t want that to happen. To avoid this, we can clear SIGINT and SIGTSTP from the pending set before starting the sigwait loop:

    // Tell the OS we want to completely ignore SIGINT/SIGTSTP. If these were
    // already in the pending set, they will be removed.
    signal(SIGINT, SIG_IGN);
    signal(SIGTSTP, SIG_IGN);
    // Allow SIGINT/SIGTSTP to come in again. Assuming these signals are still
    // blocked, they will be added to the pending set when they come in, and any
    // calls to `sigwait` will retrieve them.
    signal(SIGINT, SIG_DFL);
    signal(SIGTSTP, SIG_DFL);
    

    Note: We’ve uploaded a video overviewing some pieces of this milestone here. It may be helpful if you’re having some trouble putting this together.

    Testing suggestion: Start your shell, press ctrl+c, and then run sleep 3. It should sleep for a full 3 seconds. Separately, try running sleep 3 and pressing ctrl+c while it is running. Make sure it still exits as you expect. Test ctrl+z as well. Try this several times, and in different orders, to make sure your code is robust.

  7. Implement the fg builtin, which takes a stopped process – stopped presumably because it was running in the foreground at the time you pressed ctrl-z – and prompts it to continue, or it takes a process running in the background and brings it into the foreground. The fg builtin takes a job number, translates that job number to a process group ID, and forwards a SIGCONT on to the process group via a call to killpg(groupID, SIGCONT). Again, right now, process groups consist of just one process, but once you start to support pipelines, you’ll want fg to bring the entire job into the foreground, which killpg can help with. After sending SIGCONT, update the job state to kForeground, and then wait for the job to stop/terminate or for SIGINT/SIGTSTP to come in, same as you did in createJob. Be sure to decompose and avoid copy/paste.

    Of course, if the argument passed to fg isn’t a number, or it is but it doesn’t identify a real job, then you should throw an STSHException that’s wrapped around a clear error message saying so. You will find the parseNumber function in stsh-parse-utils to be helpful.

    Testing suggestion: Run ./spin 5, press ctrl+z, run jobs to see the job number (in square brackets), run fg jobNum, and make sure that spin runs for another few seconds.

    stsh> ./spin 5
    ^Zstsh> jobs
    [1] 2013794 Stopped      ./spin 5
    stsh> fg 1            <sleeps for 4 more seconds before showing shell prompt....>
    stsh> jobs
    stsh>
    

    Try pressing ctrl+z and running fg several times.

    We recommend testing this with ./spin instead of sleep. When sleep 5 starts running, it calculates the time 5 seconds in the future and sleeps until that time. This means that if you press ctrl+z and then take 5+ seconds to type fg jobNum, sleep may exit immediately instead of sleeping for several more seconds. By contrast, ./spin 5 will spin on the CPU for 5 whole seconds, so if you pause it in the middle, it will still spin for the remaining time when it resumes.

  8. Add support for background jobs. The pipeline constructor already searches for trailing &'s and records whether or not the pipeline should be run in the background in the pipeline struct. A background job should be run exactly the same as a foreground job, except you should pass kBackground to joblist.addJob(), and you should not use the sigwait loop to wait for the job to finish. (It’s running in the background, after all.) Also, when a pipeline is started in the background your shell should print out a job summary that’s consistent with the following output:

    stsh> sleep 10 | sleep 10 | sleep 10 &
    [1] 27684 27685 27686
    

    (There are no supplied functions that construct this output; you’ll have to print the job ID in square brackets and loop over the processes to print the PIDs. Also, you’re only handling a single process right now, so there will only be one PID in the output, but this is what it should look like when you implement multiple processes later on.)

    This introduces a complication: if we aren’t waiting for the child process to exit, then when do we update the job list? Well, the only time it’s important for the job list to be updated is when we are printing it, which happens when handling the jobs builtin. An additional complication: we can have multiple background jobs running at the same time, so we might have any number of child processes that we need to update the status of.

    We can handle this! Before printing the job list, call waitpid repeatedly to get any status updates for child processes. You can’t call waitpid normally as you did earlier in the assignment, because if all child processes are running and haven’t had any state changes, then waitpid will block and we won’t print out the job list until a child stops/continues/terminates. That’s not good. But if we add the WNOHANG flag to waitpid, then waitpid will check if child processes have had state changes without waiting for them. If some children have had state changes (i.e. we want to update the job list), waitpid will return the PID of one such child; if no children have changed state, it will immediately return 0 without blocking. With this in mind, we can write something like the following:

    while true:
        call waitpid with WNOHANG and any other additional flags
        if waitpid returned 0 or -1, there are no more updates, so break out of the loop
        update the child's status in the job list
    

    Be sure to consolidate any redundant code with the waitpid code you previously wrote for createJob and fg. You should only have one waitpid call in all of your code. With background processes involved, it’s crucial that you call waitpid on pid -1 with WNOHANG even in createJob and fg, because otherwise, SIGCHLD signals generated from background processes might cause you to call waitpid when waiting for the foreground job to finish, and if you haven’t done this properly, your shell will block on waitpid and ignore any incoming SIGINT/SIGTSTP signals.

    Testing suggestion: Run sleep 5 & twice. The stsh> prompt should print immediately. Run jobs, and you should see the jobs running. (There should be no delays in seeing the jobs output.) Wait 5 seconds, then run jobs again, and the jobs should be gone.

    stsh> sleep 5 &
    [1] 38070
    stsh> sleep 5 &
    [2] 38071
    stsh> jobs
    [1] 38070 Running      sleep 5
    [2] 38071 Running      sleep 5
    stsh> jobs
    stsh>
    
  9. Add support for the bg command, which is almost identical to fg but continues a job in the background. You should unify as much code as possible with fg.

  10. Add support for slay, halt, and cont, which send SIGKILL, SIGTSTP, and SIGCONT to a single process (as opposed to fg and bg, which send signals to an entire group). Be sure to unify as much code as possible, and be sure to guard against errors in user input.

    You do not need to update the job list for these builtins. It’s unnecessary, and keep in mind that sending SIGTSTP isn’t even guaranteed to stop the process. Instead, let the jobs builtin take care of updating the list.

    Testing suggestion: Run ./spin 20 (or 30, or 60) and test out each builtin, ensuring it behaves as expected:

    stsh> ./spin 20 &
    [1] 38295
    stsh> halt 1 0
    stsh> jobs
    [1] 38295 Stopped      ./spin 20
    stsh> cont 1 0
    stsh> jobs
    [1] 38295 Running      ./spin 20
    stsh> slay 1 0
    stsh> jobs
    stsh>
    

    If the jobs output is not what you expect, that could be a bug in your jobs builtin, or it could be a bug in slay/halt/cont. Try adding print statements, especially around your waitpid calls, to make sure that your jobs builtin is picking up on all state changes. You can also use the ps command from milestone 4 to make sure that the child process states are changing as they’re signaled.

The following are additional milestones you need to hit on your way to a fully functional stsh. Each of these bullet points represents something larger.

  1. Add support for foreground jobs whose leading process requires control of the terminal (e.g. cat, more, emacs, vi, and other executables requiring elaborate control of the console). You should investigate the tcsetpgrp function as a way of transferring terminal control to a process group, and update your solution to call it from the first child process in a pipeline. getpgid may be helpful. Note: you must block, handle, or ignore SIGTTOU in the child process before calling tcsetpgrp. The good news is that we’ve already provided this to you in the starter code, so you don’t have to worry about it. If tcsetpgrp(STDIN_FILENO, pgid) succeeds, then it will return 0. If it fails, it will return -1 and you should throw an STSHException.

    After a foreground job falls out of the foreground (e.g. it exits or stops), stsh should take control back so that it can prompt the user for more input.

    You’ll also need to update your fg builtin so that stsh gives terminal control to the job before resuming it, and takes control back after it is no longer in the foreground.

    Note: if you are trying to run stsh in the cplayground debugger, you should know that tcsetpgrp doesn’t work well there, so you may need to comment out your tcsetpgrp calls to debug anything there.

  2. Add support for pipelines consisting of two processes (i.e. binary pipelines, e.g. cat /usr/include/stdio.h | wc). Make sure that the standard output of the first is piped to the standard input of the second, and that both processes are part of the same process group, using a process group ID that is identical to the pid of the leading process. You needn’t do much error checking: You can assume that all system calls succeed, with the exception of execvp, which may fail because of user error (misspelled executable name, file isn’t an executable, lack of permissions, etc.). You might want to include more error checking if it helps you triage bugs, assert the truth of certain expectations during execution, and arrive at a working product more quickly, but do all that because it’s good for you and not because you’re trying to make us happy. (Hint: the conduit user program we dropped in your repo starts to become useful as soon as you deal with nontrivial pipelines. Try typing echo 12345 | ./conduit --delay 1 in the standard shell to see what happens, and try to replicate the behavior in stsh.)

    Note that before, createJob and fg only needed to wait for one process to finish, so you could get by with a single waitpid call, but now we have multiple processes. Remember that signals are not queued, so if two child processes finish at the same time, sigwait might only return a single SIGCHLD signal. This means that you must call waitpid in a loop with WNOHANG, the same as you did in milestone 8. (Really, this code should be decomposed into a single function.)

    Also, we highly recommend using pipe2 with O_CLOEXEC instead of calling pipe:

    int fds[2];
    pipe2(fds, O_CLOEXEC);
    

    The O_CLOEXEC flag is short for “close on exec.” When the child processes call execvp, the file descriptors created by pipe2 will be automatically closed. This means you don’t need to worry about closing pipe file descriptors in the child (although closing them isn’t an error). You still need to close the file descriptors in the parent.

    You only need to call tcsetpgrp in the first process of the pipeline, although calling it in all of the children isn’t wrong.

    Testing suggestions: Try running echo 12345 | ./conduit --delay 1; the characters 1, 2, 3, 4, and 5 should appear, with a one-second delay in between each character. Also try ctrl+c/z and the builtins from earlier milestones to make sure everything is working properly with two processes. In particular, you may want to verify that both children are being added to the same process group.

    See the Testing Resources section for recommendations on tools that might help you debug problems here.

  3. Once you get your head around pipelines of one and two processes, work on getting arbitrarily long pipeline chains to do the right thing. So, if the user types in echo 12345 | ./conduit --delay 1 | ./conduit | ./conduit, four processes are created, each with their own pid, and all in a process group whose process group id is that of the leading process (in this case, the one running echo). echo's standard out feeds the standard in of the first conduit, whose standard out feeds into the standard in of the second conduit, which pipes its output to the standard input of the last conduit, which at last prints to the console. Be sure to minimize code duplication with the previous milestone.

    Note that you’ll need to create multiple pipes here, which will require storing a variable number of file descriptors. There are many ways to do this, and you can do it however you like. Our solution uses a vector<array<int, 2>> to store this (see std::array).

    Testing suggestion: See the Testing Resources section for recommendations on tools that might help you debug problems here. Also, you should go over the basic milestones again, testing functionality like ctrl+c/ctrl+z and the builtins, ensuring that everything you implemented works properly with multiple processes.

  4. Finally, add support for input and output redirection via < and > (e.g. cat < /usr/include/stdio.h | wc > output.txt). The names of input and output redirection files are surfaced by the pipeline constructor, and if there is a nonempty string in the input and/or output fields of the pipeline record, that’s your signal that input redirection, output redirection, or both are needed. Any open calls should be made in the child, not the parent. If the file you’re writing to doesn’t exist, create it (O_CREAT), and go with 0644 (with the leading zero) as the octal constant to establish the rw-r--r-- permission. If the output file you’re redirecting to already exists, then truncate it using the O_TRUNC flag. Note that input redirection always impacts where the leading process draws its input from and that output redirection influences where the caboose process publishes its output. Sometimes those two processes are the same, and sometimes they are different. Type man 2 open for the full skinny on the open system call and a reminder of what flags can be bitwise-OR’ed together for the second argument.

    Testing suggestion: You’re done implementing a fully-fledged shell! Try out pipelines of varying lengths that use input/output redirection. Be sure to use the tools in Testing Resources to ensure you don’t have any leaked file descriptors.

Shell Driver

Note: You don’t have to understand this section or how to use the shell driver. However, it will be useful if you want to write your own tests. (That’s useful for quickly testing the shell each time you make a change, alerting you if you’ve broken anything unexpected.) The sanitycheck tests are not thorough.

The stsh-driver program (there’s a copy of it in your repo) executes stsh as a child process, sends it commands and signals as directed by a trace file, and allows the shell to print to standard output and error as it normally would. The stsh process is driven by the stsh-driver, which is why we call stsh-driver a driver.

Go ahead and type ./stsh-driver -h to learn how to use it:

$ ./stsh-driver -h
Usage: ./stsh-driver [-hv] -t <trace> -s <shell> [-a <args>]
Options:
  -h         Print this message
  -v         Output more information
  -t <trace> Trace file
  -s <shell> Version of stsh to test
  -a <args>  Arguments to pass through to stsh implementation

We’ve also provided several trace files that you can feed to the driver to test your stsh. If you drill into your repo’s samples symlink, you’ll arrive at /usr/class/cs110/samples/assign4, which includes not only a copy of my own stsh solution, but also a directory of shared trace files called scripts. Within scripts, you’ll see simple, intermediate, and advanced subdirectories, each of which contains one or more trace files you can use for testing.

Run the shell driver on your own shell using trace file bg-spin.txt by typing this:

./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./stsh -a "--suppress-prompt --no-history"

(The -a "--suppress-prompt --no-history" argument tells stsh not to emit a prompt or use the fancy readline history stuff, since those confuse the sanitycheck and autograder scripts.)

Similarly, to compare your results with those generated by my own solution, you can run the driver on ./stsh_soln shell by typing:

./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"

The neat thing about the trace files is that they generate the same output you would have gotten had you run your shell interactively (except for an initial comment identifying the output as something generated via stsh-driver). For example:

$ ./stsh-driver -t ./samples/scripts/advanced/simple-pipeline-1.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"
# Trace: simple-pipeline-1
# ------------------------
# Exercises support for pipes via a foreground pipeline with
# just two processes.
stsh> /bin/echo abc | ./conduit --count 3
aaabbbccc

The process IDs listed as part of a trace’s output will be different from run to run, but otherwise your output should be exactly the same as that generated by my solution.

The trace files can contain regular shell commands as well as a few special extra commands (check out the provided trace files for some examples):

Testing resources

Here is a cplayground with the starter code. This may be helpful for debugging file descriptor wiring, and there’s a chance it might help for debugging signals.

Note: it appears tcsetpgrp doesn’t play well with gdb as used by cplayground, so you’ll need to comment out any tcsetpgrp error checking.

In order to enable the debugger, you’ll need to set a breakpoint somewhere even if you don’t actually need to step through the code line-by-line. I just set a breakpoint on the last line of main (return 0;), then opened the Open Files tab. Here’s an example output showing a pipeline of 3 processes with output redirection to a file:

I’ve also written a script that you can run directly on myth. The display isn’t as nice as cplayground, but you can run it on your code without needing to copy anything into cplayground.

Open two terminals logged into the same myth machine. Then:

Here’s the same example (sleep 10 | ./conduit | cat > out.txt) shown using inspect-fds.py:

The stsh process has stdin/out/err pointing at the terminal, with no leaked file descriptors. sleep has stdout going into a pipe that ./conduit is reading from; ./conduit has stdout going into a pipe that cat is reading from; and cat's output is going into out.txt.

Tips and Tidbits

Submitting your work

Once you’re done, you should test all of your work as you normally would and then run the infamous submissions script by typing ./tools/submit.