Special Issue on Education of Robotics & Mechatronics “Focusing on the Learning Process and Producing an Education Literature”
Naomi Miyake, Fumi Seto, Makoto Mizukawa, Shinya Kotosaka, and Tomomasa Sato
The world of robotics has long tried to teach the essence of engineering by using robots as its subject. The process of producing and operating a robot is said to embody the whole essence of engineering education, and the actual manufacturing of robots has certainly been motivating for students. For this reason, robotics researchers, particularly those in educational institutions such as universities and technical colleges, have worked hard on robot education. They have, however, faced a dilemma: a report of educational activities has not been regarded as a research paper, however hard they have worked on it. This special issue intends to resolve that dilemma and convey their educational activities as an education literature. From a scientific point of view, we will utilize robots in robot education to invigorate human resource development in robotics in terms of both quantity and quality.
Learning sciences researchers focus on what educational content they want to impart and on the various learning processes that enable such education. More specifically, they analyze what activities, in what educational contexts, yield what effects and to what extent, thereby facilitating learning of higher quality than ever. The same approach may be possible for education using robots. Various practices that give us hope for good results have been conducted independently; comparing those practices may allow the extraction of principles that bring about learning of higher quality.
A single report on a single practice is not likely, by itself, to be a research paper. Instead, in an effort to promote robotics itself, a compilation of the processes and results of various educational attempts, with clear and specific subgoals that are comparable and examinable, can become a precious body of research papers from which principles of engineering education are drawn. The formation of a community in which practices are conducted, processes and results are shared, and further practices are planned is highly likely to improve the overall quality of practice. This special issue provides the first step towards the formation of such a new community based on such a compilation of new research.
Educational research papers, the common assets for the formation of such a community, are thus required to clearly include the following contents, which constitute an education literature.
• Purpose of learning: learners with what background knowledge are to acquire what, and to what extent.
• Activity program: through what specific activities, in what order, and over what period of time the purpose of learning is to be achieved.
• Learning process: what specific learning activities are observed during the course of the program.
• Assessments: whether the expected activities occur, to what extent the purpose is achieved, and whether the activities and results observed go beyond what was expected.
If at least the above items come to be known from one practice, subsequent practices can be improved and new goals can be set. At the same time, the practice can be reproduced on other occasions. Once an original “learning score,” like a musical score, is written down, it can be improved and edited. Just as no two performances of the same musical score are identical, no two practices of one “learning score” will be identical. That does not mean, however, that “learning scores” and the processes of practicing them are not subjects of research. Just as musical scores and performances are studied for the next composition and a more impressive performance, “learning scores” and the outcomes of practicing them are subjects of research aimed at designing better future processes and conducting practices of higher quality. The attempts in this special issue are expected to inspire increasingly elaborate papers and to improve the quality of the practices that utilize their results.
For this first issue, we received as many as 39 contributions from the public. We asked the referees to select those that would constitute as diverse a collection as possible, and they finally adopted 29 contributions. We are sure that, with these contributions as our point of departure, the accumulation of results will continuously engender practices of a new dimension. Let us now overview the papers selected this time and discuss their implications for future contributions in three areas: purpose, contrast, and assessment. Please forgive us if the discussions overlap to some extent; the three areas are more interrelated than independent.
< Contrast in Purpose and Result >
Goal setting is notoriously difficult in educational research. The most common mistake is setting goals that are overly ambitious. The fact that the goal of compulsory education currently set by the Ministry of Education, Culture, Sports, Science and Technology is a “zest for living” indicates how far the idea that “the bigger the educational goal, the better” prevails. Big goals cannot be accomplished at once, however, so clearer and more specific interim goals of half-year, one-year, and four-year duration need to be set. To link the subject of research to the next practice, we have to specify as clearly as possible what interim activities and what results we expect. Otherwise, we will obtain only vague results that do not merit papers.
A real example illustrates what a specific goal looks like. A researcher teaching biology in a liberal arts course gave his students the following task in the final examination after all the lectures: “Name, in order of importance, the technical terms that you had not intended to remember but actually did during the course, and explain their definitions in that order in the time allotted.” The task may seem far-fetched, but he actually expected a certain list of terms and their explanations. He had in fact intended for the students to be able to reproduce the terms on the list and explain them: he had introduced the terms in a well-prepared order, used them repeatedly, encouraged the students to use them, and set up opportunities for the students to explain them among themselves. Such effort made this practice popular among the students and made their task performance exceed his expectations. This practice program and the report of its process could be subjects of research of very high quality. We can carefully track which of the planned activities facilitated learning, how active they were, and what level of result was attained for each term. If the students’ answers include important items that they learned spontaneously, although the teacher did not mean to teach them to that extent, the practice clearly “produced higher results than expected.” If we can review the records of the learning process (e.g., videos of the classes, records of the students’ conversations, copies of notes taken by the students in each lecture), we can at least guess which activities at what point in time produced the more-than-expected results, and we can produce a specific plan for the next practice. It is thus very beneficial to specify the educational goals before working on a practice.
< Contrast >
The learning sciences are asking less and less for so-called “control groups” in their research. This is because, for the betterment of the next practice, it is more efficient to spend the time reviewing the process of a successful practice than making a single-factor comparison: what we want to know is not whether a practice worked but how it worked in relation to other factors. There are many reasons a practice gives the desired results or not, and those reasons interact. Therefore, even if a cause of, or contributing factor to, a successful outcome is identified, it cannot be the only factor behind that success.
This does not mean, however, that educational research should not compare at all. Rather, if there are two or more paths to the successful accomplishment of a specific goal, comparing them allows the factors associated with the results to be identified more easily. In that sense, when devising a new way of teaching, it is often meaningful to compare both the processes and the results of a first practice and of a second practice whose approach is slightly adjusted based on the results of the first.
Regarding education using robots, there is an idea that the tangible nature of robots has educational effects on students, through their physical experiences, that cannot be expected from classroom lectures alone. Here is an imaginary example. Suppose that a class previously allowed students to deal with an object only on a computer screen, but the students are now provided with a real model. Suppose further that this real model has had some effect, but not to the level expected. In the second year, the class again includes operations on the screen in its introductory period, and, after the students have discussed the robot’s operation, a hands-on experience is provided. As a result, the second practice achieves almost the expected results, despite the fact that the discussion of the on-screen operation has reduced the time available for operating the real object. Now, how can this result be documented in a paper? In other words, how can this practice be reported in such a way that the reporter and readers can develop the next practice? A conclusion that “a hands-on experience may be important, but the length of time does not affect the results” does not indicate what kind of class is to be devised next. First of all, such a conclusion misses the point of the successful experience the second time.
Now suppose that this practice is video-recorded at a resolution high enough to allow comparison of the processes of the first and second classes. Reviewing the progression of the first class, we find that the model is introduced too early for the students to understand how to operate the real object, and time is wasted over it. For this reason, the students utilize only about two-thirds of the expected length of time assigned to operating the real object. Reviewing the progression of the second practice, we find that the time spent discussing the on-screen operation is only about one-third of the time assigned to real-object operation in the first class; that is, the time to operate the real object, which appears to have been reduced, is almost the same as that actually utilized in the first class. The first and second classes are compared in Table 1.
Although it may seem an afterthought, the comparison of the two practices leads to the conclusion that “for students to learn effectively from operating the actual object, the balance between the time spent clarifying what to do in that operation and the time spent actually operating the object is apparently more important than the total time assigned to the operation.” Some of the papers adopted this time would have produced easier-to-understand results had they made such comparisons explicit.
< Assessment >
Assessing what was learned as a result of teaching is more difficult than is normally thought. Suppose we ask students the question, “In what year was Natsume Soseki (a famous Japanese novelist) born?” To which student should we give the higher score: one who correctly answers “1867,” or one who answers, “In the middle of the 19th century, probably,” reasoning from the year of the Meiji Restoration, the year of the outbreak of the Thirty Years’ War, and other better-known years? In this case, the latter student often has more historical perspective than the former.
Tests require various problem-solving processes, depending on how the questions are posed (or on one’s understanding of what was asked). In that sense, an assessment can be defined as “an interpretation, from a behavior observed in response to an approach, of the cognitive process that led to that behavior.” The “approach” here is the equivalent of the test question above (i.e., the act of asking the year of Soseki’s birth), and “the behavior observed” is the equivalent of each student’s answer. The essence of an assessment lies in what cognitive process is interpreted from the behavior as having produced the answer. In this case, the two students’ answers imply that the cognitive process of the former may be mere reference to a chronological table, while that of the latter may be closer to a process of inference from a series of relevant events structured in memory as modern history. If one needs to confirm whether such an interpretation is true, the assessment can be elaborated: the student can be asked a couple of similar questions, and the overall tendency of his observable behaviors can be interpreted.
This suggests educational research built on continuous assessments rather than on a single collective assessment. The former requires more elaboration but gives more stable interpretations. Scoring students by their performance over a couple of weeks of practice totaling several hours, during which their behaviors can be observed, yields a confident, down-to-earth grade more easily than scoring them by their performance in once-a-week classes in a large lecture hall, where individual reactions are invisible to the teacher and the assessment is based only on the result of a single examination. Accordingly, if a practice is the subject of research, its results do not have to depend only on a final, single examination. Nor is it necessarily wise to rely on a subjective assessment, which is more difficult to interpret. A stable assessment is achieved by continuously providing opportunities for behavioral observation in which the cognitive processes are easy to interpret. Practices utilizing robots give more opportunities for such observation than classes in other subjects (e.g., mathematics or philosophy), in the sense that the students “externalize what they think” with a real object. Educational research on robot education should take advantage of this.
This argument may be countered by the claim that it is impossible to analyze such a great amount of learning-process data, such as video. One answer to this counterargument is that it is good to keep as much data as possible and to analyze it only as needed. Let us look back at the imaginary classroom example of experiential learning that we saw when discussing comparison. When interpreting the results of the two practices, if the focus is on the length of time spent operating the actual object, we may first measure only that point from the video. Simply doing so completes the discussion in the example above: the time cannot be measured without a video record, yet measuring time alone is sufficient to write the paper. In that sense, data on a learning process is worth recording. In addition, suppose a further hypothesis is generated: that the results differ, between a group that performed a specific actual-object operation and a group that did not, depending not only on the length of time but also on how each team used that time. The video can then be viewed again, and the occurrence of that specific activity alone can be made the subject of analysis. Adding a report of learning-process analysis guided by such a hypothesis not only helps bring the practice results into a paper but also serves as a strong factor leading to the next practice, the next educational study, and the next paper.
Researchers who have been involved in engineering education utilizing robots may each have individual hypotheses, drawn from their own experiences, about better ways of teaching. Clarifying the instructional goal itself is a subject of research. By making such a hypothesis explicit, translating a “better way of teaching” into reality in line with that hypothesis, clarifying an instructional goal, determining from a series of learning processes to what extent the goal is actually accomplished, and reporting all of this so that it is open to cooperative examination, those rules of thumb can be turned into learning principles. This special issue is, in that sense, the first step towards new learning research.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright© 2011 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.