Learning Catalytics workflow

Disclosure: my colleague, Georg Rieger, and I are currently in the process of securing post-doc funding to evaluate the effectiveness of Learning Catalytics and that position would be paid in part by Pearson, who owns Learning Catalytics.

[Photo: A whole lotta devices!]

I have been using Learning Catalytics, web-based “clickers on steroids” software, in a lecture course and a lab course since the start of September. In this post I want to focus on the logistical side of working with Learning Catalytics in comparison to clickers, and just touch briefly on the pedagogical benefits.

I will briefly summarize my overall pros and cons of using Learning Catalytics before diving into the logistical details:

  • Pro: Learning Catalytics enables a lot of types of questions that are not practical to implement using clickers. We have used word clouds, drawing questions (FBDs and directions mostly), numerical response, choose all that apply, and ranking questions. Although all of these question types, aside from the word clouds, are possible as multiple-choice if you have or are able to come up with good distractors, Learning Catalytics lets you collect the students' actual answers instead of having them give you their best guess from a selection of answers that you provide.
  • Con: Learning Catalytics is clunky. The bulk of this post will discuss these issues, but Learning Catalytics has larger hardware requirements, relies heavily on good wifi and website performance, is more fiddly to run as an instructor, and is less time-efficient than just using clickers (in the same way that using clickers is less time-efficient than using coloured cards).
  • Pro: The Learning Catalytics group tool engages reluctant participants in a way that no amount of buy-in or running around the classroom trying to get students to talk to each other seems to be able to do. When you do the group revote portion of Peer Instruction (the “turn to your neighbor” part), Learning Catalytics tells the students exactly who to talk to (talk to Jane Doe, sitting to your right) and matches them up with somebody that answered differently than they did. Although this should not be any different than instructing them to “find somebody nearby that answered differently than you did and convince them that you are correct,” there ends up being a huge difference in practice in how quickly they start these discussions and what fraction of the room seems to engage in these discussions.

Honestly, the first two points make it so that I would favour clickers a bit, but the difference in the level of engagement thanks to the group tool is the thing that has sold me on Learning Catalytics. Onto the logistical details.

 

Hardware setup

As you can see from the picture, I use a lot of devices when I teach with Learning Catalytics. You can get away with fewer devices, but this is the solution that meets my needs. I have tried some different iterations, and what I describe here is the one that I have settled on.

  • In the middle you will find my laptop, which runs my main slide deck and is permanently projected on one of the two screens in the room. It has a Bamboo writing tablet attached to it to mark up slides live and will likely be replaced by a Surface Pro 3 in the very near future.
  • At the bottom is my tablet (iPad), which I use to run the instructor version of Learning Catalytics. This is where I start and stop polls, choose when and how to display results to the students, and other such instructorly tasks. The screen is never shared with the students and is analogous to the instructor remote with the little receiver+display box that I use with iclickers. Since it accesses Learning Catalytics over wifi and is not projected anywhere, I can wander around the room with it in my hand and monitor the polls while talking to students. Very handy! I have also tried to do this from my smartphone when my tablet battery was dead, but the instructor UI is nowhere near as good on a smartphone as it is on larger tablets or regular web browsers.
  • At the top is a built-in PC which I use to run the student version of Learning Catalytics. This displays the Learning Catalytics content that students are seeing on their devices at any moment. I want to have this projected for two reasons. First, I like to stomp around and point at things when I am teaching so I want the question currently being discussed or result currently being displayed to be something that I can point at and focus their attention on instead of it just being on the screens of their devices. Second, I need the feedback of what the students see at any moment to make sure that the question or result that I intended to push to their devices has actually been pushed to their devices. For the second point, it is reasonable to flip back and forth between instructor and student view on the device running Learning Catalytics (this is what one of my colleagues does successfully), but I find that a bit clunky and it still doesn’t meet my stomping around and pointing at stuff need. The instructor version of Learning Catalytics pops up a student view and this is what I use here (so technically I am logged in as an instructor on two devices at once). The student view that pops up with the instructor version makes better use of the projected screen real estate (e.g., results are shown along the side instead of at the bottom) than the student version that one gets when logging in using a student account.

 

Other logistics

The trade-off when going from clickers to Learning Catalytics is that you gain a bunch of additional functionality, but in order to do so you need to take on a somewhat clunky and less time-efficient system. There are additional issues that may not be obvious from just the hardware setup described above.

  • I am using 3 computer-type devices instead of a computer and clicker base. Launching Learning Catalytics on a device takes only a bit longer than plugging in my iclicker base and starting the session, but this is still one or two more devices to get going (again, my choice and preference to have this student view). Given the small amount of time that we typically have between gaining access to a room and the time at which we start a class, each extra step in this process introduces another possible delay in starting class on time. With 10 minutes, I find I am often cutting it very close and sometimes not quite ready on time. In two of approximately twelve lectures where I intended to use Learning Catalytics this term, there was a wifi or Learning Catalytics website problem. Once I just switched to clickers (the students have them for their next course) and the other time the problem resolved quickly enough that it just cost us a bit of time. When I remember to do so, I can save myself a bit of time by starting the session on my tablet before I leave my office.
  • The workflow of running a Learning Catalytics question is very similar to running a clicker question, but after six weeks of using Learning Catalytics, clickers feel like they have a decent-sized advantage in the “it just works” category. There are many more choices with the Learning Catalytics software, and with that a loss of simplicity. Since I did have the experience a few weeks ago of using clickers instead of Learning Catalytics, I can say that the “it just works” aspect of the clickers was reinforced.
  • Overall, running a typical Learning Catalytics question feels less time-efficient than a clicker question. It takes slightly longer to start the question, for them to answer and then to display the results. This becomes amplified slightly because many of the questions we are using require the students to have more complicated interactions with the question than just picking one of five answers. All that being said, my lecture TA and I noted last week that it felt like we finally got to a point where running a multiple-choice question in Learning Catalytics felt very similar in time from beginning to end as with clickers. To get to this point, I have had to push the pace quite a bit with these questions, starting my “closing the poll” countdown when barely more than half of the answers are in. So I think I can run multiple choice questions with similar efficiency on both systems now, but I am having to actively force the timing in the case of Learning Catalytics. However, having to force the timing may be a characteristic of the students in the course more than the platform.
  • Batteries! Use of Learning Catalytics demands that everybody has a sufficiently charged device or ability to plug their device in, including the instructor. This seems a bit problematic if students are taking multiple courses using the system in rooms where charging is not convenient.
  • Preparing for class also has additional overhead. We have been preparing the lecture slides in the same way as usual and then porting any questions we are using from the slides into Learning Catalytics. This process is fairly quick, but still adds time to the course preparation process. Where it can become a bit annoying is that sometimes the slide and Learning Catalytics versions of the question aren’t identical due to a typo or modification that was made on one platform, but accidentally not on the other. There haven’t been a ton of these, but it is one more piece that makes using Learning Catalytics a bit clunky.
  • In its current incarnation, it seems like one could use Learning Catalytics to deliver all the slides for a course, not just the questions. This would be non-ideal for me because I like to ink up my slides while I am teaching, but this would allow one to get rid of the need for a device that was projecting the normal slide deck.

 

In closing

An instructor needs to be willing to take on a lot of overhead, inside the class and out, if they want to use Learning Catalytics. For courses where many of the students are reluctant to engage enthusiastically with the peer discussion part of the Peer Instruction cycle, the group tool functionality can make a large improvement in that level of engagement. The additional question types are nice to have, but feel like they are not the make or break feature of the system.


I finally got to meet my students from the international college

Last week was a historic time for us at Vantage College (the International first-year transfer College that gets 2/3 to 3/4 of my time depending on how you count it or perhaps who you ask). Our very first students evar arrived. For the past week, they have been participating in a 1500ish-student orientation program for international and aboriginal students on campus. I have been the faculty fellow for a group of 20ish students, and in addition to our scheduled activities, I have been going out of my way to join them for lunch (which sometimes involves taking selfies with the students). 

As a quick reminder, Vantage is a one-year residential college at UBC for international students whose incoming English scores were a bit too low for direct entry. I am teaching our enriched physics course to these students, and all of the courses in the college have additional language support on top of the regular course support. Those that are successful in the program will be able to transfer into their second year of various programs and potentially complete their degree in four years. Although I am regular right-before-the-term-starts busy, I want to quickly reflect on things…

  • Although it wasn’t obvious to me at first, one of the reasons I am really excited about this is that it is a cohort-based program, where our science courses max out at 75 seats. I am really looking forward to building relationships with the students over this time and then hopefully getting to see them continue on to be amazingly successful UBC students.
  • Our teaching team and support staff might just be the most fantastically talented group of people on campus from a per-capita-awesomeness standpoint. And we (the physics team) have been poaching some of the best TAs in our department to work with us. Our leadership team is supportive, just as excited as the rest of us, and seems to manage that magical combination of having both the students’ and instructors’ best interests in mind.
  • The average level of conversational English of the students I have met so far has been much higher than I was expecting. The conversations may be slow and involve some repetition, but I have been able to have lots of genuine conversations.
  • The students are excited! We’re excited!
  • It has been really fun discovering how many cultural references and touchstones I take for granted. I was with a group of students when a Star Wars reference (our computer science department has a wing called “x-wing”) came up. Not that I was expecting them to get the reference, but I said something to the effect of “haha, x-wing, that’s a thing from Star Wars” and then realized that most of them didn’t even know what Star Wars was.
  • Designing a program from scratch has been a great experience. In the end our Physics courses are frighteningly similar to what my small first year courses looked like at UFV, but we got there through a lot of discussions, weighing options, etc.

So meeting the students has turned all of this abstract planning quite real and what used to be the future into the present. I am so delighted to have met so many of the students with whom I’m going to spend the next 10 months. 


Help me figure out which article to write

I have had four paper proposals accepted to the journal Physics in Canada, which is the official journal of the Canadian Association of Physicists. I will only be submitting one paper and would love to hear some opinions on which one to write and submit. I will briefly summarize what they are looking for according to the call for papers and then summarize my own proposals.

Note: My understanding is that the tone of these would be similar to articles appearing in The Physics Teacher.

Call for Papers

Call for papers in special issue of Physics in Canada on Physics Educational Research (PER) or on teaching practices:

  • Active learning and interactive teaching (practicals, labatorials, studio teaching, interactive large classes, etc.)
  • Teaching with technology (clickers, online homework, whiteboards, video analysis, etc.)
  • Innovative curricula (in particular, in advanced physics courses)
  • Physics for non-physics majors (life sciences, engineers, physics for non-scientists)
  • Outreach to high schools and community at large

The paper should be 1500 words maximum.

My proposals

“Learning before class” or pre-class assignments

  • This article would be a how-to guide on using reading and other types of assignments that get the students to start working with the material before they show up in class (based on some blog posts I previously wrote).

Use of authentic audience in student communication

  • Often, when we ask students to do some sort of written or oral communication, we ask that they target that communication toward a specific imagined audience, but the real audience is usually the grader. In this article I will discuss some different ideas (some I have tried, some I have not) for giving student oral and written tasks authentic audiences: audiences that are the intended target and that will actually consume those communication tasks. This follows on some work I did this summer co-facilitating a writing-across-the-curriculum workshop based on John Bean’s Engaging Ideas.

Making oral exams less intimidating

Update your bag of teaching practices

  • This would be a summary of (mostly research-informed) instructional techniques that your average university instructor might not be aware of. I would discuss how they could be implemented in small and large courses and include appropriate references for people that wanted to learn more. Techniques I had in mind include pre-class assignments, group quizzes and exams, quiz reflection assignments, using whiteboards in class, and clicker questions beyond one-shot ConcepTests (for example, embedding clicker questions in worked examples).

Your help

This is where you come in: provide me with a bit of feedback as to which article(s) would potentially be of the most interest to an audience of physics instructors that will vary from very traditional to full-on PER folks.


I have a new job at UBC

Dear friends. I am very excited to let you know that at the end of this week I will have officially started my new job as a tenure-track instructor in the department of physics and astronomy at the University of British Columbia.

This is the department from which I received my PhD, so it is sort of like going home. The department has a great nucleus of Physics Education Research researchers, dabblers and enthusiasts, and thanks mostly to the Carl Wieman Science Education Initiative, there is also a large discipline-based science education research community there as well. I have a lot of wonderful colleagues at UBC and I feel very fortunate to start a job at a new place where it should already feel quite comfortable from the moment I start.

A major portion of my job this coming year is going to be curriculum development for a new first-year international student college (called Vantage). I will be working with folks like myself from physics, chemistry and math, as well as academic English language instructors, to put together a curriculum designed to prepare these students for second-year science courses. I will be teaching sections of the physics courses for Vantage College and bringing my education research skills to bear on assessing its effectiveness as the program evolves over the first few years. Plus I will be teaching all sorts of exciting physics courses in the department of physics and astronomy.

The hardest part about leaving UFV is leaving my very supportive colleagues and leaving all my students that have not yet graduated. Fortunately it will be easy for me to head back for the next couple of years to see them walk across the stage for convocation (and not have to sit on stage cursing that coffee that I drank). 

Stay tuned for some new adventures from the same old guy.


Summer 2012 Research, Part 1a: Bonus content on immediate feedback during an exam

This is a quick follow-up to my previous post on my research related to the effect of immediate feedback during exams.

I love it when I’m going through my “to read” pile of papers and realize that there is something in there related to one of my own research questions. There is a paper from last year (Phys. Rev. ST Physics Ed. Research 7, 010107, 2011) by Fakcharoenphol, Potter and Stelzer from the University of Illinois at Urbana-Champaign that looked at how students did on matched pairs of questions as part of preparing for an exam.

There’s a lot of interesting stuff in this paper, but the result which is most relevant to my own research is the following. They developed a web-based (voluntary) exam preparation tool where students would do a question, receive feedback as just the answer or as a solution, then do a matched question where they received the other type of feedback. They divided the students into four groups so that every student had equal access to answer vs. solution feedback on the questions.  For each matched question pair (let’s call them questions A and B), the grouping of the students allowed half of the students to answer question A first and the other half to answer question B first. Within each of those groups, half of the students received answer only feedback for their first question and the other half received solution feedback for their first question.

They called the first question of a pair answered by the students their baseline, and the students scored 58.8 ± 0.2% on those questions. Keeping in mind that they had many pairs of questions, the average performance of the students on the follow-up questions was 63.5 ± 0.3% when only the answer was supplied after answering the first question and 66.0 ± 0.3% when the solution was provided after answering the first question. There are statistically significant differences between all of these numbers, but the gains from receiving the feedback are not overly impressive. More on this in a moment.
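
As a rough check on why such small gains can still be statistically significant, the reported means and standard errors are enough for a back-of-the-envelope z-score (my own sketch, not the authors' analysis; it assumes independent samples and approximately normal standard errors):

```python
import math

def z_score(mean1, se1, mean2, se2):
    """Approximate z for the difference of two reported means,
    assuming independent samples and normal standard errors."""
    return (mean2 - mean1) / math.sqrt(se1**2 + se2**2)

# Baseline vs. answer-only feedback: 58.8 +/- 0.2 -> 63.5 +/- 0.3
print(round(z_score(58.8, 0.2, 63.5, 0.3), 1))  # about 13
# Answer-only vs. solution feedback: 63.5 +/- 0.3 -> 66.0 +/- 0.3
print(round(z_score(63.5, 0.3, 66.0, 0.3), 1))  # about 6
```

With standard errors that tiny, even a 2.5-point gain sits several standard errors away from zero, which is how the differences can be highly significant while still being unimpressive in size.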

Back to my own research. During an exam, I used matched pairs of questions and gave the students feedback on their first question (in the form of just the answer) before they answered the second question. I saw a statistically significant improvement from the first question (65.3 ± 6.8%) to the second one (77.5 ± 6.0%), but due to low statistics there was not much to conclude other than it was worth pursuing this research study further. The results from the UIUC folks set the magnitude scale for the effect I will see once I am able to improve my statistics (58.8 ± 0.2% to 63.5 ± 0.3% due to answer-only feedback).

I’m really not certain if I expect to see less, equal or more improvement for my “during an exam feedback” design than for their “preparing for an exam feedback” design. In their design, the level of preparation of their students when using the study tool is all over the map (they look at this in more detail in their paper), so it is not known if the learning effect due to the feedback also depends on when during their overall study plan they were using the tool (e.g., as a starting point for their studying vs. to check their understanding after having done a bunch of studying). Since both our designs use multiple-choice questions (but preparation vs. assessment conditions), I am not certain how guessing would play into everything.

I have to admit that if my future research into the effect of feedback during an exam finds only a 5% gain from this intervention (like UIUC found for their answer-only feedback), I doubt that I would continue with the practice.


Tier 2 Canada Research Chair – Teaching and Learning at University of the Fraser Valley

Please pass this along to anybody you know that does research in the broad field of teaching and learning and is interested in setting up shop in beautiful British Columbia. I would love to see a strong physics or other science ed researcher get awarded this chair. The competition closes August 3rd.
The meaty bit of the posting (http://www.ufv.ca/es/Careers/Faculty/Re-Post_2011_185.htm):

The successful applicant will hold a doctoral degree (obtained in the last ten years) and will be an outstanding emerging scholar who has demonstrated innovation and a proven ability to cultivate multidisciplinary, collaborative partnerships in local, national, and international research networks. The candidate must possess an original and independent research program in the general area of teaching and learning, the use of new teaching technologies and innovative pedagogical approaches relevant to the post-secondary education level.

The goals of the CRC program (www.chairs-chaires.gc.ca) are to promote leading edge research and the training of highly qualified personnel at universities.


I’m looking for feedback on my computational physics course

Overview

This will be my first course in which I use some flavour of standards-based grading. This will also be the first time I have taught a computational physics course (or more accurately a computational thinking for physicists course). I do have some experience helping students develop some computational thinking skills from my Advanced Lab course, but I have never taught a full course dedicated to this topic. Fun!

The idea is that I will be getting them to work in Python and Mathematica in parallel. My main purpose behind this is to help the students see the platform-agnostic computational thinking which underlies their computational tasks. A side benefit is that I can help Python and Mathematica put aside their differences and work toward a common goal.

This course will be based around a weekly cycle of

  • Basic tasks, to be completed in both the Mathematica and Python computing environments; and
  • Intermediate and advanced tasks, to be completed using either of the computing environments.
These tasks will be framed to the students as one, but not the only, way that they can show proficiency for a standard or set of standards. I know that a handful of people coming into the course already have some experience working with Mathematica and I look forward to them collecting their previous work to start knocking off standards.

The grading part of SBG

Students will earn marks for each of the content standards and their final grade will come from points earned by demonstrating appropriate mastery of these standards. I will give the students example tasks that correlate to the standards, but the students will always have the option of performing any computational task that they wish that shows appropriate mastery of the standards.

Approximately 2/3rds of the way through the course, the students will start working on a project which models a complex physical system. I have two standards associated with this project (a physics one and a communication one) and I am planning to either weight these standards more heavily than the others or make them into a set of more fine-grained standards. I really haven’t decided on this.

Assessing an individual standard

I plan to assess each standard on a 0-4 scale. I’m not in love with my use of the term “completed” below when discussing tasks, but the idea is that the most important thing to me is that their programs do exactly what they are meant to do.

I think that I will be allowing students to partner up which means that I need a mechanism to assess the individual beyond just a working (and properly commented) program. I think that the individual assessment will be for them to orally run me through their properly working program. Students will be given the option of submitting screencasts to do this, but not all students will have access to a computer that has both a mic and Mathematica so I will need to keep the in-person explanations as a method.

Here’s a rough outline of how I plan to assess an individual standard:

  • 4 – Exceeds expectations (student must make the case that they have exceeded expectations or have successfully completed an advanced task)
  • 3 – Meets expectations (student has completed the relevant intermediate tasks or otherwise demonstrated intermediate-level mastery in either environment)
  • 2 – Approaches expectations (student has completed the relevant basic tasks or otherwise demonstrated basic-level mastery in both environments)
  • 1 – Doesn’t meet expectations (student has completed the relevant basic task or otherwise demonstrated basic-level mastery in one of the environments)
  • 0 – Not yet assessed
Notes/caveats

  • Environment is the general term to describe either Mathematica or Python.
  • Program is the general term to describe a Python program/script or a Mathematica notebook.
  • To get a 3 or 4 you need to have shown proficiency at the 2 level (basic) for both environments. If basic proficiency is only shown for one environment, the score on that standard is reduced by 1.
  • To be eligible for reassessments, a 1 must be earned on a given standard within the first two weeks of the standard being opened.
  • Right now the connections to physics are not built in, but that is in the long-term goals for the course.

Very loose weekly plans

We will spend roughly one week on each of the following broad themes.

  1. Introduction to the environment (functions, variable types)
  2. Iteration basics and animation (introductory modeling: similar to the early Matter and Interactions VPython stuff)
  3. File input/output, basic array manipulation and case structures
  4. Advanced list/array operations and manipulation
  5. Data visualization and plotting (histograms, scatter-plots, bar charts, error bars, etc)
  6. Data analysis (basic statistical analyses, fitting)
  7. Solving complex algebraic equations, integration and differentiation
  8. Solving differential equations

After the students have started working on their projects we will spend less than half of our class time working on non-project topics.

  1. Introduction to the physics modeling project
  2. Monte Carlo methods
  3. Numerical methods
  4. Linear algebra
  5. FFTs

The standards

These standards are based on a collection of computational physics learning goals that were put together by Andy Rundquist, Danny Caballero and Phil Wagner. I then went around to all the faculty in my own department and asked them what skills they would like to see the students develop as part of this course and folded the common ones in.

There are some standards marked as “[ungraded]”. The idea with these is that they are things which do not seem to be worth assessing for one reason or another, but are things that I still want to highlight for the students as being important.

Onto the standards…

  1. Environment fundamentals
    1. [ungraded] I can use online and built-in help resources
  2. Computer algebra fundamentals
    1. I can represent and perform common operations with complex numbers and vectors.
    2. I can use built-in integration and differentiation functions. This includes using assumptions for integration.
    3. I can access mathematical and physical constants or define these constants globally when not available.
    4. I can solve algebraic equations. This includes simplifying algebraic results and using a root finder. I can use graphical or other techniques to set appropriate neighborhoods for the root finder.
  3. Programming fundamentals
    1. [ungraded] I can assign and clear variables.
    2. I can write and use robust functions. The important characteristics of a robust function are (1) it needs no access to parameters outside of those which it was passed; (2) it is written such that it can easily be copied into any script or notebook and be used as intended; and (3) it can return information in the form of single parameters, vectors, arrays or other useful objects.
    3. I can use at least two different iteration methods to accomplish the same task.
    4. I can use case structures.
  4. Arrays and lists
    1. I can manipulate and slice arrays/lists. Slicing an array means to pick out columns or rows from larger arrays. Manipulation of an array includes transposing the array, replacing elements, and adding columns/rows to existing arrays.
    2. [The wording needs work on this one] I can operate on entire arrays/lists instead of having to operate on the individual elements of the array/list.
  5. Numerical techniques
    1. I can write my own code (not call existing functions) to perform numerical integration with varying levels of precision (trapezoidal rule, Simpson’s rule).
  6. Solving differential equations
    1. I can solve ODEs (1st Order, 2nd Order and Coupled) analytically. I can determine if an analytic solution exists.
    2. I can solve ODEs numerically. I can set initial conditions and the domain of the solution.
    3. I can solve a Partial Differential Equation and specify the boundary conditions appropriately.
  7. Data manipulation, input and output
    1. I can import data from a text file which has a standard format (e.g., comma-separated or tab-separated values).
    2. I can export data to a text file which has a standard format.
    3. I can filter and sort data.
  8. Plotting data, quantities and functions
    1. I can plot 1D and 2D continuous functions and discrete data. This includes being able to superimpose multiple plots on the same canvas (e.g., visual comparisons between data and models).
    2. I can use graphical solutions to solve problems, such as simultaneous equations problems.
    3. I can modify the important parameters needed to make a graph “nice”. This includes setting axis limits, adding axis labels, changing line or point styles, making semi-log and log-log plots, plotting error bars.
    4. I can create and interpret 2D, 3D, and density/image/false-colour plots.
    5. I can create vector-field plots.
    6. I can plot solutions to ODEs and PDEs.
  9. Data Analysis
    1. I can compute the average, standard deviation, median, etc., of a data set.
    2. I can fit a model (function) to data using weighted and unweighted chi-square minimization. I can extract the best fit parameters and their errors. I can use reduced chi-square and the scatter of the residuals to evaluate the “goodness of fit”.
    3. I can perform a Fast Fourier Transform (FFT). This includes being able to account for the Nyquist frequency, normalization of the FFT, converting a FFT into a power spectrum or spectral density and performing an inverse FFT.
  10. Monte-Carlo methods
    1. I can use Monte-Carlo methods to model systems which involve random processes.
    2. I can use Monte-Carlo methods to perform error propagation.
    3. I can use Monte-Carlo methods to perform integration.
  11. Animation
    1. I can animate a physical system.
  12. Linear algebra
    1. I can perform typical matrix operations such as addition, multiplication, transposition, etc.
    2. I can find eigenvalues and eigenvectors.
    3. I can perform matrix decomposition and reduction.
  13. Mathematica-specific standards
    1. [ungraded] I can use the “/.” (ReplaceAll) command.
    2. I can use the Manipulate command.
    3. I can use patterns as part of recursion and case structures.
  14. Documentation (Portfolio-based: the student must choose and submit their three best examples for each documentation standard)
    1. I can document the use of a function. This is specific to only the details of what goes in and out of the function, not the nuts and bolts of what the function does.
    2. I can use sectioning and big-picture documentation to communicate the overall use of a program, as well as highlighting and describing the purpose of the major sections of the program.
    3. I can use documentation to clearly explain how a complex chunk of code (such as a function) works.
  15. Project [these are to be expanded into multiple or more heavily weighted standards]
    1. I can model a complicated physical system.
    2. I can write an effective project report using LaTeX.
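
As an illustration of what standard 5.1 is asking for, here is a minimal sketch of hand-rolled trapezoidal and Simpson's rule integrators. The course itself uses Mathematica, so this Python version is meant only to show the shape of the task:

```python
def trapezoid(f, a, b, n):
    """Approximate the integral of f on [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def simpson(f, a, b, n):
    """Approximate the integral of f on [a, b] using Simpson's rule.
    n must be even (Simpson's rule works on pairs of subintervals)."""
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

# Example: integrate x**2 from 0 to 1 (exact answer 1/3)
t_est = trapezoid(lambda x: x**2, 0.0, 1.0, 100)
s_est = simpson(lambda x: x**2, 0.0, 1.0, 100)
```

The "varying levels of precision" part of the standard falls out naturally: the trapezoid estimate has an O(h²) error, while Simpson's rule is O(h⁴) and is exact for this polynomial.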
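
Standard 6.2 (solving ODEs numerically, with initial conditions and a domain) could look something like the following forward-Euler sketch in Python. This is illustrative only; in the course a built-in solver or a higher-order method would be the realistic choice:

```python
def euler(f, y0, t0, t1, n):
    """Solve dy/dt = f(t, y) with y(t0) = y0 using n forward-Euler steps.
    Returns lists of t and y values spanning the domain [t0, t1]."""
    h = (t1 - t0) / n
    ts, ys = [t0], [y0]
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)  # Euler update: step along the local slope
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: dy/dt = -y with y(0) = 1; exact solution is exp(-t)
ts, ys = euler(lambda t, y: -y, 1.0, 0.0, 2.0, 1000)
```

The returned arrays are exactly what standard 8.6 (plotting solutions to ODEs) would then feed into a plot.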
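
Standards 7.1–7.3 amount to round-tripping delimited text and then filtering and sorting the result. A Python sketch using the standard csv module; the in-memory buffer stands in for a file on disk:

```python
import csv
import io

# Write comma-separated data to a text buffer (a real file path works the same)
rows = [["t", "x"], [0.0, 1.0], [0.5, 0.88], [1.0, 0.54]]
buf = io.StringIO()
csv.writer(buf).writerows(rows)

# Read it back, converting the numeric rows from strings to floats
buf.seek(0)
reader = csv.reader(buf)
header = next(reader)
data = [[float(v) for v in row] for row in reader]

# Filter (keep rows with x above 0.6) and sort by x, descending
filtered = sorted((r for r in data if r[1] > 0.6),
                  key=lambda r: r[1], reverse=True)
```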
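
For standard 9.2, the weighted chi-square fit of a straight line has a closed-form solution, which makes a nice self-contained example. A Python sketch using the standard weighted least-squares sums (again, the course itself would do this in Mathematica):

```python
def weighted_linear_fit(xs, ys, sigmas):
    """Fit y = a + b*x by chi-square minimization, weighting each point
    by 1/sigma**2. Returns (a, b, sigma_a, sigma_b)."""
    w = [1.0 / s**2 for s in sigmas]
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta  # intercept
    b = (S * Sxy - Sx * Sy) / delta    # slope
    return a, b, (Sxx / delta) ** 0.5, (S / delta) ** 0.5

# Example: data lying exactly on y = 1 + 2x, with uniform errors
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
a, b, sa, sb = weighted_linear_fit(xs, ys, [0.1] * 4)
```

Computing the reduced chi-square and residuals from the best-fit `a` and `b` is then a one-liner, which covers the "goodness of fit" half of the standard.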
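
Standards 10.2 and 10.3 can both be illustrated in a few lines: sample, average, and read the statistical uncertainty off the spread of the samples. A Python sketch (illustrative only; the seeds are fixed so the results are reproducible):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f on [a, b] from n uniform random samples.
    Returns the estimate and its 1-sigma statistical uncertainty."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    width = b - a
    return width * mean, width * (var / n) ** 0.5

def mc_propagate(func, means, sigmas, n=100_000, seed=1):
    """Propagate Gaussian uncertainties through func by sampling each
    input from its own normal distribution. Returns (mean, std dev)."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        args = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        outs.append(func(*args))
    mean = sum(outs) / n
    sd = (sum((o - mean) ** 2 for o in outs) / (n - 1)) ** 0.5
    return mean, sd

# Integration example: x**2 on [0, 1]; exact answer 1/3
est, err = mc_integrate(lambda x: x**2, 0.0, 1.0, 100_000)

# Error-propagation example: the uncertainty on x + y with
# sigma_x = 3 and sigma_y = 4 should come out near 5
mean, sd = mc_propagate(lambda x, y: x + y, [10.0, 20.0], [3.0, 4.0])
```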
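
Standard 12.2 would normally be met with a built-in eigensolver, but a power-iteration sketch shows what is going on under the hood. This Python version finds the dominant eigenvalue only and is illustrative, not how the course would actually do it:

```python
def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue and eigenvector of a square
    matrix A (a list of rows) by repeated multiplication and
    normalization, finishing with a Rayleigh-quotient estimate."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary starting vector
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v.Av (v is unit length) gives the eigenvalue
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(vi * avi for vi, avi in zip(v, Av))
    return lam, v

# Example: [[2, 1], [1, 2]] has eigenvalues 3 and 1
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```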

Some issues with the above standards

  1. Some of the linear algebra stuff overlaps with some of the computer algebra stuff (vectors and matrices).
  2. I would like the standards to be similar in scope to each other, but they are nowhere near that right now.

The feedback I am looking for

I would love to hear any and all feedback you might have, either in the comments below or on a Google Docs version of this post that I have made publicly commentable. Some specific things I have in mind are…

What are the most important or least important standards on the list? If you have any opinions on which ones seem less essential or which ones seem absolutely essential, I would love to hear about it.

How can I improve my proposed assessment of the standards? Do you have any suggestions for alternate ways of assessing the standards at the individual level for work done by a group in this context?

Not this round

I have a very heavy teaching load in the fall (three new-to-me upper-division courses), so I am trying to figure out appropriately sized chunks of ambition for each. I have lots of ideas for each course, but there is no way that I will find the time to do a decent job of everything I have in mind. With that in mind, you will notice that there is no emphasis on sense-making or interpretation in the standards or in the way the standards are assessed. I think Danny Caballero and colleagues are doing some fantastic work on this at CU, and I really want to fold these things into the course in the future, but for this round they are going to have to be things that I bring up repeatedly in class without explicitly building them into the course. Baby steps, Ives. Baby steps.