Training End-Users to Program Industrial Robots

Exploring the effectiveness of interactive tutorials and block-based programming

ABSTRACT

A growing number of education initiatives are encouraging everyone to learn basic programming skills. These initiatives often rely on low-threshold programming environments (e.g. block-based programming) and scaffolded activities that lead the user towards a specific outcome (e.g. tutorials). While both block-based programming and tutorials have been studied with younger learners, if and how they support adult end-user programmers in accomplishing authentic tasks is less well understood. This paper presents the findings from a study (N=79) on the effectiveness of an interactive tutorial system for a block-based programming environment for one-armed industrial robots. Our results show that interactive tutorials are more effective than video tutorials as a training mechanism in terms of both correctness of solution and development time, but result in no difference in perceived usability, which was high in both conditions. These high usability scores, along with high task completion rates, replicate and extend our earlier work, showing that non-experts can create robot programs not only in a virtual setting but also in the more complex physical setting. This work contributes to our understanding of the design of accessible end-user programming tools and of ways to support novices in accomplishing authentic programming tasks.

1 INTRODUCTION

The last two decades have seen a growth in research on ways to make programming more accessible to novices, focusing both on end-user programmers [41] as well as learners in educational contexts [26, 37]. One approach of growing popularity is the use of graphical block-based programming systems [10], which have been found to be an effective way to introduce novices to the practice of programming [28, 52, 64, 70]. While much of the research on block based programming has focused on purely virtual environments, there are a growing number of environments and studies showing its potential for programming physical robotics systems [7, 44, 63]. In conjunction with new programming interfaces, a growing body of research is revealing ways to build scaffolded tutorial systems that help introduce novices to foundational ideas of computing and author working programs [38, 39].

At the same time, advances in technology and manufacturing are changing the role of robotics systems in our world. As costs decline and capabilities expand, robots and robotic systems have the potential to make an impact in a growing number of contexts, including new uses in the workplace as well as in the home. For example, the emergence of collaborative robots, which are designed to work alongside humans rather than replace them [1, 35], is poised to significantly change the way manufacturing occurs, especially for smaller companies that historically have not been able to take advantage of large-scale robotics automation [42]. As this trend advances, employers and employees are faced with a new challenge: programming (and re-programming) robots.

Historically, large companies have hired specialists to program their robots [50]. Yet hiring specialists is expensive, both in financial cost and in the time needed to get outside consultants up to speed on the specifics of the context and task at hand. To solve these problems, companies are increasingly looking to train existing employees to manage and maintain newly introduced robots. As these smaller companies seek to automate the myriad activities where robots excel (e.g. repetitive movements or actions that require very high levels of precision), employees tasked with integrating robots into existing workflows must be able to define, or at least modify, routines for the robot to execute. As such, developing robot programming environments that are easy to use, as well as easy to learn, can have a significant impact on facilitating and advancing the introduction of robots into new contexts. Recently, the design and implementation of more accessible, intuitive approaches to controlling robots has been an active area of research [14].

The work presented in this paper draws together these two lines of research (block-based programming and interactive tutorials) towards the goal of putting the power of industrial robotics at the fingertips of novice programmers. More concretely, this paper seeks to answer the following research question:

How does the design of an interactive tutorial system for a block-based programming environment support novices in programming an industrial robot?

To answer this question, we created a block-based programming environment for an industrial robot called CoBlox [62]. We then created an interactive tutorial for CoBlox designed to lead novices through a set of foundational industrial robotics actions such as a pick-and-place routine. To evaluate our system and answer the stated research question, we conducted a user study in which 79 adult novices were trained on CoBlox using either our interactive tutorial system or by watching a short video covering the exact same functionality. Participants were then asked to complete a series of robot programming activities with an industrial robot. The sessions were video recorded and participant-authored programs were collected. At the conclusion of the activity, participants completed a usability survey and responded to a series of short-answer prompts.

The contribution of this work is two-fold. First, it advances our understanding of the design of interactive tutorials and the effectiveness relative to more conventional video-based tutorials. Second, it provides further evidence for the potential of the block-based programming paradigm for supporting end-user programming, expanding from virtual to physical robotics. These findings are novel in their application to the programming of physical robotics systems where the act of programming includes both the manipulation of code in a virtual context and the movement of the robot in the physical world. Further, it provides additional evidence for the effectiveness of block-based programming as an approach for implementing motion-based programming tasks. As the role for industrial robots expands and the need to train end-users to control them grows, understanding how best to support novices in successfully programming physical robotics systems has the potential to have a significant, meaningful impact.

2 RELATED WORK

In this section, we briefly review three literatures related to this work: block-based programming, tutorials, and end-user robotics programming, focusing specifically on how the first two have impacted the third.

Block-based Programming

While not a recent innovation [11, 23], block-based programming is becoming increasingly widespread both within and beyond formal educational contexts. Led by the popularity of tools like Scratch [54] and Alice [23], as well as initiatives like Code.org's Hour of Code [2], which uses numerous block-based environments, block-based programming is increasingly becoming the way that novices are introduced to programming. Block-based programming uses a programming-command-as-puzzle-piece metaphor to provide visual cues as to how, where, and when a given programming command can be used. Writing programs in block-based environments takes the form of dragging and dropping commands together. When two commands cannot be assembled to create a valid programming statement, the environment prevents them from being snapped together, thus preventing syntax errors in the authoring of programs. Research is revealing the various ways that block-based tools support novice programmers [66] and how they can serve as an effective way to introduce novices to foundational programming practices and computer science ideas [28, 32, 52, 64, 70]. Particularly relevant to this work, there is also comparative research showing the block-based programming approach to be effective for introducing novices to robotics programming [18, 56].
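
To illustrate the mechanics behind this guarantee, the following minimal sketch (in Python, with block and socket type names invented purely for illustration) shows the kind of connection rule the puzzle-piece metaphor implies: a block may only snap into a socket whose expected type it produces.

    # Minimal sketch of the puzzle-piece connection rule; the type names
    # are invented for illustration and do not come from any real system.
    BLOCK_OUTPUT = {"close_hand": "statement", "item_count": "number"}
    SOCKET_EXPECTS = {"program_body": "statement", "repeat_times": "number"}

    def can_snap(block, socket):
        """Allow a connection only when the types line up, preventing
        syntactically invalid programs by construction."""
        return BLOCK_OUTPUT[block] == SOCKET_EXPECTS[socket]

    print(can_snap("close_hand", "program_body"))  # True
    print(can_snap("item_count", "program_body"))  # False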

Given the success of block-based programming in the creation of digital media and interactive stories, a vibrant ecosystem of block-based environments has emerged. This includes block-based interfaces for mobile application development [57, 68], modeling and simulation tools [12, 34, 67], and even block-based video games [27, 65]. Likewise, a growing number of robotics toolkits support block-based programming, including kits for kids such as Dash-and-Dot [69], the BBC micro:bit [7], the Finch Robot [44], and Ozobots [48], as well as toolkits that support more sophisticated applications, such as Open Roberta [4] and the Modkit [47] and Arduviz [51] environments for programming the Arduino microcontroller.

Automated Tutoring Systems and Tutorials

A large body of work in the human-computer interaction field has focused on helping people understand how to interact with new and novel technologies [22, 31]. Likewise, the effectiveness of tutors in education has long been known [17] with a great deal of research and effort being put towards exploring ways to use computers and technology to achieve outcomes that match the impact of human tutoring [24, 61]. Of particular note is the significant body of research on intelligent tutoring systems [5] and, relevant for the work presented below, research on developing intelligent tutors for learning programming [25, 55]. In this review, we focus on tutorial systems and features most closely aligned with robotics and programming.

A recent review of online coding tutorials sampled 30 systems conceptually organized by type (e.g. interactive tutorials, educational games, and MOOCs) and generated a list of recommendations for tutorial systems, including the importance of engaging learners in active learning through writing code during the tutorial and goal-directed activities [39]. Likewise, there is a growing number of automated tutoring systems for learning to program in both text-based [3] and block-based contexts [38]. There is also a significant body of work on the role of automated visualizations in supporting novice programmers, which plays a part in automated tutors and tutorial systems [58]. Another form of support found to be useful for helping novices author successful programs is automated hint generation, for which tools have been developed specifically for block-based programming contexts [53].

Shifting to the HCI literature, research investigating ways to support sequential instructions has found it beneficial to integrate instructions into the application itself rather than have users interact with static pictures or artificial views of the interface residing outside the application [13, 29, 40]. One successful implementation of this approach in the context of learning to program is the use of stencils, which overlay a translucent layer on the interface to direct the user's attention toward specific features and guide interaction [38]. Research has also shown that the inclusion of animations [33, 49] or videos [30] can improve the effectiveness of tutorials. Likewise, designing tutorials around specific, simplified tasks [21] and creating a gamified context [46] have also been found to be productive in training novices in the use of a new piece of software. This work, along with research on the design of programming tutorials, informed the CoBlox interactive tutorial system presented below.

End-User Robotics Programming

Historically, controlling robots has taken the form of writing programs in a highly technical, often proprietary, programming language [14]. As such, becoming a capable robotics programmer required extensive training, and those who authored the programs were different from those who used them. End-user programming, on the other hand, is defined as "programming to achieve the result of a program primarily for personal, rather than public use" [41]. In the case of robotics programming, this means the author is creating programs to solve an immediate need of their own rather than authoring a program to be used by others. Biggs and MacDonald [14] identify two major strategies for supporting end-user programming of robotics systems: manual and automatic.

In manual end-user robotic programming systems, the act of programming the robot retains the command-by-command interaction, but scaffolds are introduced to address many of the barriers that make the task so challenging. For example, a number of end-user programming approaches use a visual programming approach to support novices; these include environments such as Lego Mindstorms [36] and MORPHA [16], as well as CoBlox [62], the block-based environment that is the focus of this research. A second widely used form of manual end-user robotics programming system allows users to define sequences of instructions by following predefined wizards and selecting options from structured menus.

In contrast to the manual approach to robotics programming, automatic end-user robotics programming hides the programming language from the user and instead provides other mechanisms for defining positions and instructions. Examples of automatic programming approaches include gesture-following robots that imitate human actions [19], learning systems [6], and programming-by-demonstration platforms [15], where the user "demonstrates" the desired actions, which the robot "remembers" and then repeats. These automatic approaches all rely on the user physically interacting with the robot and often depend on a hand-held device, often called a teach pendant [43], to instruct the robot on what positions and movements to record.

The CoBlox environment and interactive tutorial seek to blend these two approaches, incorporating aspects of the manual approach, in the form of defining sequential instructions in a block-based environment, as well as the automatic approach, by allowing users to physically interact with the robot to define positions. Both of these components of the act of programming are introduced as part of the interactive CoBlox tutorial we introduce and evaluate below.

3 APPROACH

CoBlox is a block-based programming language and editor that novice programmers can use to program one-armed industrial robots. It was one of the first attempts at using the block-based paradigm for end-user programming in a professional context [63]. It is well-suited for relatively simple but common industrial tasks like pick-and-place and machine tending. CoBlox takes advantage of a number of features of the block-based programming approach to make the task of programming industrial robots more accessible to novices. This includes a domain-specific language with commands custom tailored to robotic movements (e.g. a Move quickly to command that sends the arm to a stored position).
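
To give a flavor of what such a domain-specific program looks like, the sketch below renders a hypothetical pick-and-place routine in Python; the command names (move_quickly_to, close_hand, etc.) and position labels are illustrative assumptions, not the actual CoBlox block set.

    # Hypothetical textual rendering of a CoBlox-style pick-and-place
    # routine; command names and positions are assumptions for illustration.
    class Robot:
        """Stub standing in for the real robot controller."""
        def move_quickly_to(self, position): print(f"move quickly to {position}")
        def move_slowly_to(self, position): print(f"move slowly to {position}")
        def close_hand(self): print("close hand")
        def open_hand(self): print("open hand")

    def pick_and_place(robot):
        robot.move_quickly_to("above_object")  # approach from a safe height
        robot.move_slowly_to("at_object")      # descend onto the object
        robot.close_hand()                     # grip the object
        robot.move_quickly_to("above_box")     # carry the object over the box
        robot.open_hand()                      # release it into the box
        robot.move_quickly_to("home")          # return to the start pose

    pick_and_place(Robot())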

CoBlox Tutorial Vignette

To launch a CoBlox Interactive Tutorial, users click the “Tutorial” button, which displays a list of available tutorials. The first tutorial focuses on opening and closing the robot’s gripper. As soon as the tutorial starts, users are presented with their first prompt (Figure 1.a), which reads: “Click on the Grip drawer in the block toolbox”. Note that the prompt points directly at the drawer that the user is being asked to click on, providing a clear, in-context link to the interface.

Upon seeing this prompt, users click on the Grip drawer, which opens it, revealing the available set of grip-related commands. As soon as the Grip drawer is open, a new tutorial prompt appears instructing users to drag the Close hand block onto the workspace (Figure 1.b). Users then drag the block from the drawer and drop it in the center of the workspace. The block is rendered as slightly transparent, as it is not yet attached to the main program block. Upon dropping the block on the workspace, the next prompt appears, telling users to attach this block to the main block, as in Figure 1.c. Upon snapping Close hand to the main block, an animation appears showing users the behavior produced by this command (Figure 1.d).

Once the animation finishes, a new prompt appears pointing to the Close hand button on the upper right side of the application. Pressing this button produces the same result as using the Close hand command in a program.

As the first CoBlox tutorial continues, users follow prompts to add and delete commands until they finally have a functional program and are ready to run it on the physical robot. The tutorial then walks users through this process, starting with a prompt to press the “Apply Changes” button, which downloads the code onto the robot. Users are then prompted to open the execution pane, where they can control the robot’s execution of their program. This concludes the first tutorial, returning users to the normal application mode from which they can either create their own program or proceed to another CoBlox tutorial.

Key Tutorial Features

CoBlox Interactive Tutorials (CITs) build upon existing block-based tutorial systems with three key design features: immediate feedback, inline directions, and real-world coordination.

Existing tutorial systems often provide feedback, but it is not immediate. For instance, in current https://code.org tutorials users receive feedback only after completing their program and pressing “Run”. CITs, in contrast, check whether feedback is needed after each new block is added, providing immediate and actionable feedback.
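
A minimal sketch of this check-after-every-block design, assuming each tutorial step is described by the block sequence it expects (the data layout and prompt text here are our invention, not the CoBlox source):

    # Illustrative per-block feedback loop; an assumed design, not CoBlox code.
    EXPECTED = ["close_hand"]  # block sequence the current tutorial step expects

    def on_block_added(workspace):
        """Validate the workspace after every block drop and return feedback."""
        placed = [b for b in workspace if b != "main"]
        for got, want in zip(placed, EXPECTED):
            if got != want:
                return f"Feedback: expected a '{want}' block here, found '{got}'."
        if len(placed) < len(EXPECTED):
            return f"Prompt: drag the '{EXPECTED[len(placed)]}' block onto the workspace."
        return "Step complete - advancing to the next tutorial prompt."

    print(on_block_added(["main"]))                # prompts for 'close_hand'
    print(on_block_added(["main", "close_hand"]))  # step complete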

A second innovation is inline direction. Many existing tutorial systems use a designated dialog or frame to provide direction, which is often decontextualized from the programming environment. This requires users to interpret directions and find the relevant on-screen widgets themselves. CoBlox, following the Stencils approach used in Alice [38], provides directions and feedback directly beside (and pointing to) the relevant widget, as shown in Figure 1.

To coordinate with physical systems, existing approaches often rely on supports outside of the programming environment, typically in the form of reference manuals or printouts. This requires users to map between several different media (i.e. from a physical manual to the digital programming environment to the robot) when interpreting directions. To reduce this mapping burden, CITs tightly integrate the tutorial steps with the physical robot: when a user defines a location in CoBlox, the system activates lead-through mode on the robot arm and shows a demonstration video of the arm’s positioning on its screen.

4 EXPERIMENTAL EVALUATION

To evaluate CITs, we conducted a comparative user study in which users were asked to use the CoBlox environment to program an industrial robot after being trained either with CITs or by watching a conventional training video. We chose a video-based tutorial as the comparison for two reasons. First, it is by far the most popular training solution for software and hardware systems [60]. Second, and relatedly, it is a relatively inexpensive approach compared to developing an interactive tutorial system, yet it is still effective. This effort-to-benefit ratio is important to vendors when deciding which training approach to adopt.

The training methods

The interactive tutorials. As described in Section 3, the CITs were built directly into CoBlox. For this study, we asked subjects to complete a sequence of four tutorials: (1) Using the robot hand, (2) Moving the arm, (3) Picking and placing an item, and (4) Calling a procedure. The four tutorials were designed to take around 15 minutes in total to complete and cover the concepts needed to achieve basic but useful industrial robot tasks. Subjects were not given a time limit when completing tutorials.

Figure 1: Screenshots a-d show the CoBlox Interactive Tutorial system in action. To view the above as a video, open the file in a viewer that supports video (e.g., Adobe Acrobat Reader) and press play, or visit https://youtu.be/8EBI_XlGXJs.

The video-based tutorial. The video-based tutorial was created by executing the steps from the interactive tutorials (with the tutorial guidance system turned off) with voiceover explanations. This means the two training conditions differed only in delivery, not in content. The tutorial video was created using screen-capture software showing the CoBlox application. When appropriate, such as when running a program or positioning the robot, an inset view of the physical robot was shown (Figure 2). The video is 10 minutes and 51 seconds long. Participants in this condition watched the full video at the outset of their session.

Participants

Our prior research on the CoBlox environment focused on adult novices with no prior programming or robotics experience [62]. This study targets a similar but more specific type of user: the technical novice. In contexts where collaborative robots are being deployed, management usually does not hire a robot programmer but instead assigns the programming responsibility to an existing employee. Thus, we expect employees with a technical background (e.g. robot operators, mechanics, or members of another technical engineering discipline) to become the primary users of CoBlox. These users have some related expertise but little or no formal training in the specifics of robotics programming.

To match the technical novice profile, we recruited participants from a diverse set of office professionals at a large office site of a multinational engineering conglomerate in Bangalore, India. Within this site we recruited employees who had some technical expertise (e.g. engineers, software developers, technical administrators) but who did not do any form of robotics programming as part of their formal job responsibilities.

Participants were recruited via a single inter-office social media post in the technical group. This post, along with word-of-mouth communication, yielded hundreds of potential participants. Candidates were invited to participate 1-2 days prior to their potential session on a rolling basis, starting from the candidates who signed up first and proceeding down the list. A total of 90 people participated in the study, of whom 79 completed all aspects of the protocol and are thus included in the analysis below. Of the 11 participants who did not complete the study protocol, 2 decided to withdraw from the study, while the remaining 9 encountered significant technical difficulties during their training or the tasks; these difficulties were caused by the prototype nature of the programming environment and could not be resolved immediately. None of the 79 included participants encountered technical difficulties.

The average age of the participants was 30.6 years (SD 6.5 years). Participants came from a range of backgrounds: 45% engineers, 15% software developers, and 19% researchers, with the rest either administrative staff or working in product support. Of the participants who indicated their gender, 68% were male and 28% were female. Only 7 participants had any prior experience with programming robots (average 3.7 years); the rest had no prior knowledge when it came to working with robots. The study was approved by the institutional review board of the lead authors’ university with permission of the industry partner.

Figure 2: A screenshot of the video tutorial, showing the screencast (left, background) and the inset (right, foreground).

Figure 3: The robotics lab set up for Task #1. The LEGO-like object is being picked in order to be placed in the white box.

User Study Procedure

The study took place in an on-site robotics lab and was designed to last a total of 90 minutes. The robotics lab included the physical robot set up in front of a table with a series of objects on it that would be the focus of the programming tasks (Figure 3).

Each participant followed the same procedure, beginning with completing a consent form while the proctor initialized the robotics environment. The initialization procedure included physically positioning the robot in a standard pose, configuring key robot settings, starting screen-capture software on the tablet running CoBlox, and starting a video camera recording of the lab. After the consent form was completed, the formal study protocol began. At the outset of the study, participants were handed a tablet (10.1 inch Lenovo MIIX 320 Atom running Windows 10) that would be used for both the training and the programming tasks.

The first step of the protocol was for the participant to be trained using either the CoBlox Interactive Tutorial or the training video. After finishing the training, the participant was asked to complete three robot programming tasks. For each task they were given a time limit (discussed below). At the end of the time limit for a given task, the participant was notified and allowed one final compilation and run to evaluate their final program’s correctness. A time limit was necessary because the study was conducted during the participants’ workday, so a fixed schedule was needed.

After each task was completed, the proctor collected task-specific information (e.g. the program and timing data) and then reset the workspace for the next task. After finishing all tasks, participants were asked to complete a survey focused on demographics and their experience programming the robot.

Tasks. The participants were asked to complete three real-world-inspired pick-and-place tasks, which we refer to as Create, Modify, and Reverse, with time limits of 15, 10, and 10 minutes, respectively. The first task (Create) was the most challenging and time consuming, requiring participants to write a new program from scratch, while the Modify and Reverse tasks were shorter and asked participants to alter an existing program. Participants were given unlimited attempts at compiling and running their program within the allotted time.

The Create task asked participants to write a program to make the robot pick up an object from the work table and place it into a box. For the second task, Modify, subjects were asked to modify an existing pick-and-place program so that the robot placed the object in a new location. For the final task, Reverse, participants were provided a working program and asked to reverse it (i.e., the box’s and object’s locations were swapped). The Modify and Reverse tasks simulate common scenarios in collaborative robotics where the fixture changes, requiring only relatively minor modifications to existing, functional programs.

Data Collection and Analysis

A number of data sources were collected as part of this study. Below, each source is described along with a brief discussion of how it relates to the stated research question.

Surveys. We used the System Usability Scale (SUS) [20] to evaluate the usability of the CoBlox programming interface. SUS is a standardized way to measure the satisfaction perceived by users interacting with an interface and is commonly used in usability engineering [9]. The SUS survey comprises ten statements about the interface being evaluated. The odd-numbered statements have a positive connotation (e.g., “I thought the CoBlox language was easy to use”) and the even-numbered ones have a negative connotation (e.g., “I thought there was too much inconsistency in the CoBlox language”). Participants were asked to score their agreement with each statement on a five-point Likert scale. The data from the SUS are intended to help us both replicate prior findings and understand if/how the different forms of tutorials impacted perceived usability.

After participants completed the SUS survey, they were asked to complete a short survey including demographic data (age, gender, job title, prior programming experience, etc.) and respond to a number of short answer questions to provide further insight into their experience. The short answer prompts asked participants to identify features of CoBlox that made the programming tasks easy, things they found difficult, any suggestions for improving CoBlox, and in the case where the participant had prior programming experience, how CoBlox compared to other programming tools they had used. The surveys were administered at the end of the protocol after the participants had completed the programming tasks. To analyze the textual data of the survey and interview responses, we used techniques based on Grounded Theory [59], in particular open coding and axial coding to determine higher level themes.

CoBlox Log Data. We collected logs from each participant as they worked in the CoBlox environment. These logs contain a timestamped entry for every action made in the CoBlox programming canvas (e.g. adding a new block, modifying block inputs, defining robot locations). This detailed activity data allows us to determine the exact start and stop time for each user as well as what they did while working in CoBlox.
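
As a sketch of how such logs support the timing analyses reported below, assume each entry is a (timestamp, action) pair; the field layout shown is our assumption, not the actual log format. Time-on-task is then the span between a participant's first and last logged action.

    from datetime import datetime

    # Assumed log layout: (ISO timestamp, action) pairs for one participant/task.
    log = [
        ("2018-06-04T10:02:11", "add_block:move_quickly_to"),
        ("2018-06-04T10:03:05", "define_location"),
        ("2018-06-04T10:11:42", "run_program"),
    ]

    def time_on_task(entries):
        """Elapsed time between the first and last logged action."""
        times = [datetime.fromisoformat(stamp) for stamp, _ in entries]
        return max(times) - min(times)

    print(time_on_task(log))  # 0:09:31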

Program Correctness. To evaluate the correctness of the programs that participants authored, we recorded a copy of each program the participant created, one per task. We also collected a screencast of the programming process and a video recording covering both the creation process and the final program run. This video recording included the robot, the fixture (i.e., the object and all relevant containers), and the user.

By reviewing the program and the video of the program running, the second author evaluated the functional correctness of each participant’s final program on a 10-point scale. He did so using a rubric that allotted three points for each major sub-task (e.g., picking up the object, placing the object, etc.) and a final point for positioning the arm properly after finishing. For each sub-task, all three points were given for correct execution (as seen on the video); if the sub-task was correct at a high level but a significant issue occurred (e.g., failing to pick up the object), two points were deducted; for a minor issue (e.g., the object was picked up, but at an odd angle), one point was deducted. To penalize working but overly complicated programs that would be harder to understand or maintain in a realistic scenario, he further scored the complexity of each final program on a 5-point scale. The maximum reachable score on this scale was capped relative to the correctness score to avoid improving the relative score of simple but incorrect programs. The sum of both scores was then used to evaluate participants on a total scale from 0 to 15 points.
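
To make the rubric concrete, the sketch below computes a total score under one possible reading of this description; the exact cap rule (here, complexity capped at half the correctness score) is an assumption, since the text states only its intent.

    # Illustrative scoring under the rubric described above; the cap rule
    # is an assumption, as only its purpose is stated in the text.
    def score_program(subtask_points, final_pose_point, complexity):
        """subtask_points: one 0-3 value per major sub-task;
        final_pose_point: 0 or 1; complexity: 0-5, higher = simpler program."""
        correctness = sum(subtask_points) + final_pose_point   # 0..10
        capped_complexity = min(complexity, correctness // 2)  # assumed cap
        return correctness + capped_complexity                 # 0..15

    # A program that picks and places correctly but parks the arm oddly:
    print(score_program([3, 3, 3], 0, 5))  # correctness 9, total 13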

5 RESULTS

The data set included in the analysis presented below comprises the 79 participants who completed the full study protocol, 38 of whom were trained using the CoBlox Interactive Tutorials and 41 of whom received training via video. In this section, we compare these two groups based on their time spent in the training phase as well as their programming accuracy and speed.

Time Spent on Training

Participants in the video group watched the 10-minute-51-second (non-interactive) video, while participants in the tutorial group were trained by completing all four of the CoBlox Interactive Tutorials. Participants in the CoBlox Interactive Tutorials condition took an average of 17 minutes and 1 second (SD 6m 11s) to complete their training.

All participants were able to complete all tutorials, serving as one data point for the tutorials being accessible and easy to use for our technical novice population. This is reflected in one participant's comment that the "Tutorial was good [with] easy to use command buttons" and another's suggestion that "Tutorials and tasks could be more challenging". While the majority of subjects completed the tutorials quickly, 15 participants spent more than 20 minutes on their training. Many of these 15 were less technologically sophisticated; for instance, the proctor had to demonstrate dragging and dropping on a touchscreen to one of these participants.

Programming Time

We measured participants’ time spent on solving each task. As shown in Figure 4, participants in the video training condition consistently needed more time to complete their tasks. Starting with the Create task, participants in the video condition spent an average of 13 minutes and 38 seconds (SD 3m 24s) completing their programs compared to 9 minutes and 59 seconds (SD 4m 1s) for participants trained via the CoBlox Interactive Tutorial, a statistically significant difference (t(77) = 4.37, p < .0001).
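
For readers wishing to reproduce this style of comparison, an independent-samples t-test over two vectors of completion times can be computed as below; the arrays are made-up stand-ins, not the study data.

    from scipy import stats

    # Made-up completion times in seconds; stand-ins for the two conditions.
    video_times = [820, 790, 900, 760, 845]
    tutorial_times = [600, 640, 560, 700, 590]

    t, p = stats.ttest_ind(video_times, tutorial_times)
    print(f"t = {t:.2f}, p = {p:.4f}")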

This timing data matches responses from the survey, which show that 45% of subjects trained via the tutorial strongly agreed that CoBlox is quick to learn, while only 36% of the video subjects strongly agreed. Similarly, when asked if they felt they had to learn a lot in order to get started, 35% of tutorial subjects strongly disagreed while only 23% of video subjects strongly disagreed.

The open response questions from the survey provide insight into this result. Tutorial users consistently commented on the ease and speed of learning and using the CoBlox Interactive Tutorials, saying “Even common man can work with this language very quickly”, “CoBlox is simple and easy to use. Any layman can use this with a basic guide of 5 mins”, and “all features combined made it easy to program the robot”. When asked what features aided them in programming CoBlox, users clearly identified the CoBlox Interactive Tutorials, saying: “Easy to use blocks, Tutorial” and “User interface and the demo”, referring to the tutorial as a demo.

In contrast, subjects trained via video commented that they wanted more support, saying “if we get small brief from trainer before starting program. it will be easy to understand for those who are doing first time... I was not able to found options as first time I was doing”, that we should “have more examples”, and that we should have “error messages if user is doing something wrong.”

The differences in programming times, Likert-scale responses, and tone of free responses show that participants who were trained via the CoBlox Interactive Tutorials were able to author industrial robot programs more quickly than those trained via video.

For the Modify and Reverse tasks, we see less difference between the two groups with respect to time-on-task. For Modify, the mean task times were 7 minutes and 20 seconds (SD 2m 56s) for video participants and 6 minutes and 11 seconds (SD 2m 48s) for tutorial subjects, a difference that does not reach the p < .05 significance threshold (t(76) = 1.64, p = .06). For the Reverse task, mean task times were 6 minutes and 24 seconds (SD 2m 50s) for the video condition and 5 minutes and 20 seconds (SD 3m 1s) for those trained via the CoBlox Interactive Tutorial, again not significant at the p < .05 level (t(77) = 1.47, p = .07). One comment from an open response question suggests a possible reason for the smaller differences between groups on these tasks, with the subject stating “usage for couple of times would make CoBlox easier for anyone”. Indeed, when asked whether others could quickly learn CoBlox, only 5% of tutorial subjects and 10% of video subjects did not agree. This data shows that both tutorial- and video-trained subjects improved when moving from the initial Create task to the subsequent program alteration tasks.

As mentioned in the experimental setup, one of the constraints imposed by our participant pool is the time limit on each task. This reduces the potential variance between groups by imposing a ceiling on time-on-task. Thus, in addition to studying the difference between group means, it is also important to see how many subjects from each treatment used the maximum time, as we show in Figure 5. For the first task, 15 video-trained participants and only two of the tutorial-trained participants required the full 15 minutes. On the second and third tasks, 28 video-trained and 9 CoBlox Interactive Tutorial-trained participants used the full amount of allotted time. This data shows that the CoBlox Interactive Tutorial was also more effective at helping the users who needed the greatest support in the task.

Program Correctness

Participants’ final programs for each task were scored on a scale from 0 to 15 points. As shown in Figure 6, participants trained via the CoBlox Interactive Tutorial achieved higher mean scores on all three tasks. For Create, the average for video subjects is 9.4 (SD 5.7) compared to 12.1 for tutorial subjects (SD 3.9), a statistically significant difference (t(77) = -2.44, p < .01). For the Modify task, participants collectively scored higher, and we again see a statistically significant difference based on training method: 13.3 (SD 3.3) for the CoBlox Interactive Tutorial and 11.0 (SD 5.5) for the video (t(76) = 2.29, p = .012). The Reverse task showed a similar pattern, with the CoBlox Interactive Tutorial condition having an average score of 13.9 (SD 2.6) and the video condition having an average score of 12.1 (SD 4.9), which is again significant (t(??) = -2.24, p = .014). Across all three tasks there were 8 participants who scored 0 points, meaning that they were unable to make any progress towards creating a successful program; all of these participants were in the video condition. The difference in success by condition is mirrored in survey responses. When asked if they felt confident programming an industrial robot with CoBlox, 35% of tutorial subjects strongly agreed while only 26% of video subjects strongly agreed. This data shows that participants trained via the CoBlox Interactive Tutorial were more successful in writing functional, efficient programs than those trained via a conventional video tutorial.

Figure 4: Box-plot of the times spent on tasks (lower is better). Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend to the 5th and 95th percentiles; data points are plotted as open circles.

Figure 5: Number of participants who used the maximum allowed time for each task.

Figure 6: Box-plot of the scores for participants’ solutions (higher is better). Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend to the 5th and 95th percentiles; data points are plotted as open circles.

Usability

The standard analytic approach for the System Usability Scale (SUS) is to convert responses into a number on a 100-point scale, with higher scores meaning the system is more usable. While not a percentage, the resulting value can be interpreted as a percentile score, with a score of 68 considered average [45]. Bangor et al. [8] defined adjectives for ranges of scores to ease interpretation of the SUS score; we reuse these ranges in our analysis as well.
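
The conversion itself is mechanical: each odd (positively worded) item contributes its rating minus 1, each even (negatively worded) item contributes 5 minus its rating, and the sum is multiplied by 2.5. The sketch below implements this standard computation.

    def sus_score(ratings):
        """Convert ten 1-5 Likert ratings (items 1..10 in order) to a 0-100 SUS score."""
        assert len(ratings) == 10
        total = 0
        for item, rating in enumerate(ratings, start=1):
            total += (rating - 1) if item % 2 == 1 else (5 - rating)
        return total * 2.5

    print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 87.5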

The mean SUS score for the CoBlox programming environment over the entire population of participants (i.e., without differentiating on the training received) was 75.60 (SD 11.9), or "excellent" based on the aforementioned interpretation scale. This data shows that the CoBlox environment scores highly in usability, regardless of training technique.

Participants who trained using the CoBlox Interactive Tutorial gave CoBlox a mean score of 76.68 (SD ??), while participants who were trained via video gave CoBlox a mean usability score of 74.48 (SD ??), a difference that is not statistically significant (t(??) = ??, p = 0.41). The lack of impact of training approach is possibly explained by the fact that CoBlox is easy to use without any prior training. As one CoBlox Interactive Tutorial participant put it: “[the CoBlox] User interface is much better. Even common man can work with this language very quickly." Another participant echoed this sentiment in saying: “CoBlox is simple and easy to use. Any layman can use this with a basic guide of 5 mins". Subjects who were trained using the video had similar responses, saying: “The Drag and drop methodology of programming was very easy to use. The work-space of the tool was user friendly. The features like Procedures are very helpful in programming". Another participant reiterated this idea by stating: “The tasks were performed and developed with an ease. I believe that any movement of the robot could be easily performed by using a suitable drop box and defining the action under them." This data shows that, despite differences in speed and correctness, the training technique did not impact participants’ perceptions of system usability.

At the end of the survey, we asked for participant feedback on what they viewed as the strengths of the environment, with a total of 58 participants providing responses. Of the responses received, 36 indicated that the ease of use (or simplicity) of the interface made CoBlox stand out for them. One participant referenced the “Easy drag and drop feature" and another put this succinctly as “Simple programming language". A further 15 participants felt that the best part of CoBlox was the procedures drawer that allowed them to use predefined scripts. One participant put this as “Functionalities is a very good feature which could be used in very interesting manner". Finally, 4 participants specifically mentioned the CoBlox Interactive Tutorial system as contributing to the overall positive experience. This data shows that subjects’ positive feedback centered on ease-of-use features.

We also asked participants about potential areas for improvement in CoBlox. Of the 49 responses we received, 12 mentioned that they would prefer a better way to handle errors in the execution of a program, or errors that originate from the robot (e.g. “Error handling, when 2 errors occurred, I had to restart twice [...]"). A few participants stated that they would like the ability to use their voice to define what blocks need to be placed on the canvas, as put by one participant: “Like google home, it should use voice commands instead of doing programming [...]". Other responses included references to layout, a lack of keyboard shortcuts, and the touch-screen platform in general.

Replication and Extension of Prior Work

Our previous study showed that the majority of subjects could create robot programs using CoBlox. However, the initial study limited participants to working with virtual robots in a simulation rather than a real robot. Working with physical robots and actual objects introduces new challenges. The current study extends the prior work on CoBlox to examine if and how the scaffolds found to be effective for the virtual environment also helped novices when working with an actual, physical robot.

As part of this study, technical novices attempted a total of 237 industrial robot programming tasks (79 participants × 3 tasks each). Across these tasks, participants achieved the maximum correctness score 48% of the time, with mean scores of 10.7 (SD 5.1), 12.1 (SD 4.7), and 13.0 (SD 4.1) for the three tasks. Based on their responses to the post-survey, these high scores were not a surprise to participants. When asked if they felt confident when using CoBlox, 90% of CoBlox Interactive Tutorial-trained participants and 87% of video-trained participants either agreed or strongly agreed. Some even viewed the tasks as easy, writing things such as “Tutorials and tasks could be more challenging”, “[the task was] Very Easy”, and “[the experiment] could have been longer with more complex tasks” in their post-programming survey responses.

Another indicator of success was subjects’ ability to finish tasks in time, especially considering the tight limits (15, 10, and 10 minutes for the three tasks). Of the 237 tasks undertaken by participants in this study, the time limit was hit in only 37 (16%).

As indicated by the participants’ free-response comments, CoBlox was reported as generally easy to use (this finding is discussed in greater detail in the section focused on ease-of-use). When asked explicitly whether CoBlox was easy to use, only 6% of participants gave a negative response. Looking at the scores that subjects achieved, their ability to finish in a timely manner, their overwhelmingly positive free responses, and their Likert-scale evaluation of ease-of-use, the data shows that the majority of the technical novices were able to author programs for an industrial robot using CoBlox.

6 DISCUSSION

Industrial Robot Programming for Novices

This study, along with our previous work with virtual robots, makes a clear case that non-experts can program industrial robots to perform simple tasks with the right set of supports. Both of our studies asked subjects to perform tasks that were similar to those performed in industry, specifically pick and place tasks. The high success rate of subjects, some of whom even commented that tasks could be more challenging, suggests that they felt confident in their ability to successfully implement even more complex industrial robot programming tasks. The relevant question now becomes: how complex a task can non-experts hope to program? This study shows that moving the robotic arm, even in a semi-constrained environment, poses little challenge. It also shows that picking and placing objects is approachable for technical novices. However, the tasks given remain simple relative to the larger world of industrial robot programming. This study did not ask users to send or receive signals from other machines, interact with vision systems, or apply any force to an object (e.g., to snap two parts together). Investigating these more advanced features, especially how to make them accessible to non-experts, is key to continuing to expand the boundaries of what non-expert robot programmers can accomplish.

Learning and Usability

One of the surprising results from this study was the lack of correlation between user performance and perceived usability. According to the SUS scores, there was only a small (and not statistically significant) difference between the CoBlox Interactive Tutorial condition and the video-trained condition, while there was a significant difference in both time-on-task and correctness of programming solution (on Task 1). This finding can partially be explained by the sequencing of the tasks: the difference in performance was greatest on the first (and most challenging) task and faded as the protocol progressed; as seen in Figures 4 and 6, the performance of both groups became much more similar by the third task (Reverse). Nevertheless, it remains surprising that participants reported such similar usability numbers despite different levels of success. This has interesting implications for robot system designers. If customers are sensitive to first-time experiences, then investing in tutorial-based training would lead to better results. However, if customers are generally more persistent, then training has little effect on long-term usability. A second potential explanation has to do with participants’ expectations. Industrial robot programming usually requires an advanced degree. As such, it is possible that the participants assumed they would have little success, so the fact that they were able to write successful programs with so little training can explain the relatively high level of reported usability across the two groups.

The Case for Videos

While, overall, tutorial subjects performed better, especially on the first task, it is also clear from both the usability scores and the diminishing difference in performance on tasks two and three that video training is also effective. This point was supported by two users in the CoBlox Interactive Tutorial condition who mentioned that “a video tutorial will come handy” and that they would like a “User-friendly tutorial – A videos guide.” At least for these two users, a video tutorial to supplement their interactive tutorial training would also have been appreciated.

For those creating training systems for robotics programming, this has important implications. Implementing an interactive tutorial system adds an estimated 10 to 20% of scope to the overall programming interface. This increased investment may not always be possible, especially in contexts where profit margins are slim and development budgets tight. Our recommendation is that for manufacturers aiming at the high end of the market, seeking to provide the best experience, or large enough to invest in richer on-boarding support, interactive tutorials are worth the investment; for others, the cost may be too high.

7 CONCLUSION

References

[1] [n. d.]. Cobots: Robots For Collaboration With Human Operators.

[2] [n. d.]. Hour of Code. http://code.org/learn

[3] Philip J. Guo. 2013. Online Python Tutor: embeddable web-based program visualization for CS education. https://doi.org/10.1145/2445196.2445368

[4] [n. d.]. Open Roberta. https://www.open-roberta.org/en/welcome/

[5] John R. Anderson, C. Franklin Boyle, and Brian J. Reiser. 1985. Intelligent tutoring systems. Science 228, 4698 (1985), 456–462. http://www.academia.edu/download/31310363/Science_1985_Anderson.pdf

[6] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. [n. d.]. A survey of robot learning from demonstration. 57, 5 ([n. d.]), 469–483. http://www.sciencedirect.com/science/article/pii/S0921889008001772

[7] Thomas Ball, Jonathan Protzenko, Judith Bishop, Michał Moskal, Jonathan de Halleux, Michael Braun, Steve Hodges, and Clare Riley. [n. d.]. Microsoft touch develop and the BBC micro:bit. In Proceedings of the 38th International Conference on Software Engineering Companion - ICSE ’16 (2016). ACM Press, 637–640. https://doi.org/10.1145/2889160.2889179

[8] Aaron Bangor, Philip Kortum, and James Miller. 2009. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies 4, 3 (2009), 114–123.

[9] Aaron Bangor, Philip T Kortum, and James T Miller. 2008. An empirical evaluation of the system usability scale. Intl. Journal of Human– Computer Interaction 24, 6 (2008), 574–594.

[10] David Bau, Jeff Gray, Caitlin Kelleher, Josh Sheldon, and Franklyn Turbak. [n. d.]. Learnable programming: blocks and beyond. 60, 6 ([n. d.]), 72–80. https://doi.org/10.1145/3015455

[11] A. Begel. [n. d.]. LogoBlocks: A graphical programming language for interacting with the world.

[12] A Begel and E Klopfer. [n. d.]. Starlogo TNG: An introduction to game development. ([n. d.]).

[13] Lawrence Bergman, Vittorio Castelli, Tessa Lau, and Daniel Oblinger. [n. d.]. DocWizards: A System for Authoring Follow-me Documentation Wizards. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (2005) (UIST ’05). ACM, 191–200. https://doi.org/10.1145/1095034.1095067 event-place: Seattle, WA, USA.

[14] Geoffrey Biggs and Bruce MacDonald. [n. d.]. A survey of robot programming systems. In Proceedings of the Australasian conference on robotics and automation (2003). 1–10. http://www.cs.jhu.edu/~alamora/ire/res/papers/BiggsProgSurvey.pdf

[15] Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. [n. d.]. Robot programming by demonstration. In Springer handbook of robotics. Springer, 1371–1394. http://link.springer.com/10.1007/978-3-540-30301-5_60

[16] Rainer Bischoff, Arif Kazi, and Markus Seyfarth. [n. d.]. The MORPHA style guide for icon-based programming. In Robot and Human Interactive Communication, 2002. Proceedings. 11th IEEE International Workshop on (2002). IEEE, 482–487. http://ieeexplore.ieee.org/abstract/document/1045668/

[17] Benjamin S. Bloom. [n. d.]. The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. 13, 6 ([n. d.]), 4–16. http://www.jstor.org/stable/10.2307/1175554

[18] Tracey Booth and Simone Stumpf. [n. d.]. End-User Experiences of Visual and Textual Programming Environments for Arduino. In End-User Development, Yvonne Dittrich, Margaret Burnett, Anders Mørch, and David Redmiles (Eds.). Number 7897 in Lecture Notes in Computer Science. Springer Berlin Heidelberg, 25–39. https://doi.org/10.1007/978-3-642-38706-7_4

[19] Cynthia Breazeal and Brian Scassellati. [n. d.]. Robots that imitate humans. 6, 11 ([n. d.]), 481–487. http://www.sciencedirect.com/science/article/pii/S1364661302020168

[20] John Brooke et al. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry 189, 194 (1996), 4–7.

[21] John M. Carroll. [n. d.]. The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill. MIT Press.

[22] John M. Carroll and Mary Beth Rosson. [n. d.]. Paradox of the active user. ([n. d.]).

[23] S. Cooper, W. Dann, and R. Pausch. [n. d.]. Alice: a 3-D tool for introductory programming concepts. 15, 5 ([n. d.]), 107–116.

[24] Albert Corbett. [n. d.]. Cognitive Computer Tutors: Solving the Two-Sigma Problem. In User Modeling 2001 (2001) (Lecture Notes in Computer Science), Mathias Bauer, Piotr J. Gmytrasiewicz, and Julita Vassileva (Eds.). Springer Berlin Heidelberg.

[25] Tyne Crow, Andrew Luxton-Reilly, and Burkhard Wuensche. [n. d.]. Intelligent tutoring systems for programming education: a systematic review. In Proceedings of the 20th Australasian Computing Education Conference on - ACE ’18 (2018). ACM Press, 53–62. https://doi.org/10.1145/3160489.3160492

[26] C Duncan, T Bell, and S Tanimoto. [n. d.]. Should Your 8-year-old Learn Coding?. In Proceedings of the 9th Workshop in Primary and Secondary Computing Education (2014) (WiPSCE ’14). ACM. https://doi.org/10.1145/2670757.2670774

[27] Sarah Esper, Stephen R. Foster, and William G. Griswold. [n. d.]. Code-Spells: embodying the metaphor of wizardry for programming. In Proceedings of the 18th ACM conference on Innovation and technology in computer science education (2013). ACM, 249–254.

[28] D Franklin, G Skifstad, R Rolock, I Mehrotra, V Ding, A Hansen, D Weintrop, and D Harlow. [n. d.]. Using Upper-Elementary Student Performance to Understand Conceptual Sequencing in a Blocks-based Curriculum. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (2017) (SIGCSE ’17). ACM, 231–236. https://doi.org/10.1145/3017680.3017760

[29] Floraine Grabler, Maneesh Agrawala, Wilmot Li, Mira Dontcheva, and Takeo Igarashi. [n. d.]. Generating Photo Manipulation Tutorials by Demonstration. In ACM SIGGRAPH 2009 Papers (2009) (SIGGRAPH ’09). ACM, 66:1–66:9. https://doi.org/10.1145/1576246.1531372 event-place: New Orleans, Louisiana.

[30] Tovi Grossman and George Fitzmaurice. [n. d.]. ToolClips: an investigation of contextual video assistance for functionality understanding. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2010). ACM, 1515–1524.

[31] Tovi Grossman, George Fitzmaurice, and Ramtin Attar. [n. d.]. A survey of software learnability: metrics, methodologies and guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2009). ACM, 649–658.

[32] Shuchi Grover and Satabdi Basu. [n. d.]. Measuring Student Learning in Introductory Block-Based Programming: Examining Misconceptions of Loops, Variables, and Boolean Logic. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (2017). ACM Press, 267–272. https://doi.org/10.1145/3017680.3017723

[33] Susan M. Harrison. [n. d.]. A comparison of still, animated, or nonillustrated on-line help with written or spoken instructions in a graphical user interface. In Proceedings of the SIGCHI conference on Human factors in computing systems (1995). ACM Press/Addison-Wesley Publishing Co., 82–89.

[34] M. S. Horn, C. Brady, A. Hjorth, A. Wagh, and U. Wilensky. 2014. Frog Pond: a code-first learning environment on evolution and natural selection. In Proceedings of the 2014 Conference on Interaction Design and Children. ACM, 357–360. http://dl.acm.org/citation.cfm?id=2610491

[35] Martin Hägele, Walter Schaaf, and Evert Helms. 2002. Robot assistants at manual workplaces: Effective co-operation and safety aspects. In Proceedings of the 33rd International Symposium on Robotics (ISR). 7–11. https://pdfs.semanticscholar.org/57d1/758d9b6f2e08cd44d10335da992e9089786e.pdf

[36] Lego Systems Inc. [n. d.]. Lego Mindstorms NXT-G Invention System. http://mindstorms.lego.com

[37] C. Kelleher and R. Pausch. 2005. Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers. ACM Computing Surveys 37, 2 (2005), 83–137.

[38] Caitlin Kelleher and Randy Pausch. 2005. Stencils-based tutorials: design and evaluation. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 541–550.

[39] Ada S. Kim and Andrew J. Ko. 2017. A pedagogical analysis of online coding tutorials. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education. ACM, 321–326.

[40] Kevin Knabe. 1995. Apple Guide: a case study in user-aided design of online help. In Conference Companion on Human Factors in Computing Systems. ACM, 286–287.

[41] Andrew J. Ko, Robin Abraham, Laura Beckwith, Alan Blackwell, Margaret Burnett, Martin Erwig, Chris Scaffidi, Joseph Lawrance, Henry Lieberman, Brad Myers, Mary Beth Rosson, Gregg Rothermel, Mary Shaw, and Susan Wiedenbeck. 2011. The State of the Art in End-user Software Engineering. ACM Computing Surveys 43, 3 (2011), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

[42] S. Kock, T. Vittor, B. Matthias, H. Jerregard, M. Källman, I. Lundberg, R. Mellander, and M. Hedelind. 2011. Robot concept for scalable, flexible assembly automation: A technology study on a harmless dual-armed robot. In 2011 IEEE International Symposium on Assembly and Manufacturing (ISAM). 1–5. https://doi.org/10.1109/ISAM.2011.5942358

[43] Daisuke Kushida, Masatoshi Nakamura, Satoru Goto, and Nobuhiro Kyura. 2001. Human direct teaching of industrial articulated robot arms based on force-free control. Artificial Life and Robotics 5, 1 (2001), 26–32. http://link.springer.com/article/10.1007/BF02481317

[44] Tom Lauwers and Illah Nourbakhsh. 2010. Designing the Finch: Creating a robot aligned to computer science concepts. In AAAI Symposium on Educational Advances in Artificial Intelligence. http://www.aaai.org/ocs/index.php/EAAI/EAAI10/paper/viewFile/1849/2334/

[45] James R Lewis and Jeff Sauro. 2009. The factor structure of the system usability scale. In International conference on human centered design. Springer, 94–103.

[46] Wei Li, Tovi Grossman, and George Fitzmaurice. 2012. GamiCAD: a gamified tutorial system for first time AutoCAD users. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12). ACM Press, 103–112. https://doi.org/10.1145/2380116.2380131

[47] Amon Millner and Edward Baafi. 2011. Modkit: blending and extending approachable platforms for creating computer programs and interactive objects. In Proceedings of the 10th International Conference on Interaction Design and Children. ACM, 250–253. http://dl.acm.org/citation.cfm?id=1999074

[48] Ozobot & Evollve, Inc. [n. d.]. Ozobot. http://ozobot.com/

[49] Susan Palmiter and Jay Elkerton. 1991. An evaluation of animated demonstrations of learning computer-based tasks. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. ACM, 257–263.

[50] Zengxi Pan, Joseph Polden, Nathan Larkin, Stephen Van Duin, and John Norrish. 2012. Recent progress on programming methods for industrial robots. Robotics and Computer-Integrated Manufacturing 28, 2 (2012), 87–94. https://doi.org/10.1016/j.rcim.2011.08.004

[51] A. B. Pratomo and R. S. Perdana. 2017. Arduviz, a visual programming IDE for Arduino. In 2017 International Conference on Data and Software Engineering (ICoDSE). 1–6. https://doi.org/10.1109/ICODSE.2017.8285871

[52] Thomas W. Price and Tiffany Barnes. 2015. Comparing Textual and Block Interfaces in a Novice Programming Environment. In Proceedings of the Eleventh Annual International Conference on International Computing Education Research (ICER '15). ACM Press, 91–99. https://doi.org/10.1145/2787622.2787712

[53] Thomas W. Price, Yihuan Dong, and Dragan Lipovac. 2017. iSnap: Towards Intelligent Tutoring in Novice Programming Environments. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education. ACM, 483–488. http://dl.acm.org/citation.cfm?id=3017762

[54] M. Resnick, Brian Silverman, Yasmin Kafai, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, and Jay Silver. 2009. Scratch: Programming for all. Commun. ACM 52, 11 (2009), 60–67. https://doi.org/10.1145/1592761.1592779

[55] Kelly Rivers and Kenneth R. Koedinger. 2017. Data-driven hint generation in vast solution spaces: a self-improving Python programming tutor. International Journal of Artificial Intelligence in Education 27, 1 (2017), 37–64.

[56] Jose Maria Rodriguez Corral, Ivan Ruiz-Rube, Anton Civit Balcells, Jose Miguel Mota-Macias, Arturo Morgado-Estevez, and Juan Manuel Dodero. 2019. A Study on the Suitability of Visual Languages for Non-Expert Robot Programmers. IEEE Access 7 (2019), 17535–17550. https://doi.org/10.1109/ACCESS.2019.2895913

[57] W. Slany. 2014. Tinkering with Pocket Code, a Scratch-like programming app for your smartphone. In Proceedings of Constructionism 2014.

[58] Juha Sorva, Ville Karavirta, and Lauri Malmi. 2013. A Review of Generic Program Visualization Systems for Introductory Programming Education. ACM Transactions on Computing Education 13, 4 (2013), 15:1–15:64. https://doi.org/10.1145/2490822

[59] Anselm Strauss and Juliet Corbin. 1994. Grounded theory methodology. Handbook of qualitative research 17 (1994), 273–85.

[60] Hans van der Meij and Jan van der Meij. 2014. A comparison of paper-based and video tutorials for software learning. Computers & Education 78 (2014), 150–159.

[61] Kurt VanLehn. 2011. The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems. Educational Psychologist 46, 4 (2011), 197–221. https://doi.org/10.1080/00461520.2011.611369

[62] David Weintrop, Afsoon Afzal, Jean Salac, Patrick Francis, Boyang Li, David C. Shepherd, and Diana Franklin. 2018. Evaluating CoBlox: A Comparative Study of Robotics Programming Environments for Adult Novices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 366, 12 pages. https://doi.org/10.1145/3173574.3173940

[63] David Weintrop, David C Shepherd, Patrick Francis, and Diana Franklin. 2017. Blockly goes to work: Block-based programming for industrial robots. In 2017 IEEE Blocks and Beyond Workshop (B&B). IEEE, 29–36.

[64] D. Weintrop and U. Wilensky. 2017. Comparing Block-Based and Text-Based Programming in High School Computer Science Classrooms. ACM Transactions on Computing Education 18, 1 (2017), Article 3. https://doi.org/10.1145/3089799

[65] D. Weintrop and U. Wilensky. 2012. RoboBuilder: A program-to-play constructionist video game. In Proceedings of the Constructionism 2012 Conference, C. Kynigos, J. Clayson, and N. Yiannoutsou (Eds.).

[66] D. Weintrop and U. Wilensky. 2015. To Block or Not to Block, That is the Question: Students' Perceptions of Blocks-based Programming. In Proceedings of the 14th International Conference on Interaction Design and Children (IDC '15). ACM. https://doi.org/10.1145/2771839.2771860

[67] M. H. Wilkerson-Jerde and U. Wilensky. 2010. Restructuring Change, Interpreting Changes: The DeltaTick Modeling and Analysis Toolkit. In Proceedings of the Constructionism 2010 Conference, J. Clayson and I. Kalas (Eds.).

[68] David Wolber, Hal Abelson, Ellen Spertus, and Liz Looney. 2014. App Inventor 2: Create Your Own Android Apps (2nd ed.). O'Reilly Media.

[69] Wonder Workshop, Inc. [n. d.]. Dash & Dot. https://www.makewonder.com/

[70] Zhen Xu, Albert D Ritzhaupt, Fengchun Tian, and Karthikeyan Umapathy. 2019. Block-based versus text-based programming environments on novice student learning outcomes: a meta-analysis study. Computer Science Education (2019), 1–28.

