History of Game Design by Jim Kinney is licensed under a Creative Commons Attribution 3.0 Unported License.
Project Introduction by Professor Jim Kinney, School of Design, George Brown College
Faculty Knowledge Transfer
Let's begin with the end. Every form of applied research that we did in our lab, known as The Knowledge Garden, had specific intended outcomes:
- Expose students to experiential learning through active research
- Engage students directly with technologies (Beta releases where possible) with a minimum level of faculty intervention and training
- Empower students as co-researchers, allowing them to analyze and document workflows and efficacy
- Allow students to hypothesize on possible use-case scenarios in the field of design, design education or education in general
- Allow students a forum for publishing their findings and sharing these findings with key stakeholders from faculty and administration
- Demonstrate the potential for moving to a Just-In-Time learning model.
To that end, from May through June 2013, knowledge gleaned from student explorations in using Augmented Reality was transferred to faculty from the schools of Fashion, Jewellery and Graphic Design in an attempt to explore the potential for incorporating its use into the teaching and learning ecosystem. We approached this as a means of extending our notions of what a class is, of how knowledge may be represented within that space, and of how knowledge could still exist within the physical confines of a class regardless of whether or not the professor was there. It enabled us to explore the creation of environments of discovery where students could be encouraged to explore the class space and uncover hidden gems of learning—transforming passive learning into highly active and engaged forms of treasure hunting or exploration predicated on a student's curiosity and desire to know.
In addition to exploring AR tools, faculty were shown how to create a course WIKI using our Apple WIKI server. We discussed the importance of this resource for documenting and curating learning activities.
Tools Used:
Hardware:
- Mac Pro
- MacBook Pro
- Snowball Microphone
- iPod Touch
- iPad 2
- Nikon DX
- Apple WIKI server
Software:
- Adobe Illustrator
- Adobe After Effects
- Adobe Premiere Pro
- Adobe InDesign
- Adobe Media Encoder
- Aurasma App
- Aurasma Studio (media and distribution management web interface)
Below are some examples of potential applications for faculty:
Classroom Introductions:
The purpose of this was to create a graphic avatar of each professor that, when pointed at with a smart device, would trigger a video introduction featuring the actual professor talking about their availability.
This exercise introduced the faculty members to capturing video on greenscreen, audio recording, and video and audio editing in Adobe After Effects and Adobe Premiere Pro. They also performed "keying," which enabled their live-action video to mesh with their graphic avatars by removing unwanted background information. They recorded their videos, edited them, keyed out the backgrounds and merged them with the graphic avatars that they had created in Adobe Illustrator.
Printouts of their avatars could then be posted outside their rooms with instructions on how to use the AR software AURASMA in order to experience their video intros.
The "trigger" avatar images and the short video introductions were uploaded and bound together using the Aurasma control panel.
Introductory Video by Professor Carolyn Perry-Donnen (CLICK to view):
How-To Videos
Simple "trigger" images were generated against a background informing the user how to access the AR experience. These images would then trigger a brief demonstration video on a particular subject matter. These triggers could be printed and distributed to students in loose leaf form, bound in a booklet or placed up around the class or school.
HOW-TO VIDEO that launches after scanning the above poster image:
NOTE: If you wish to try this out, DOWNLOAD and PRINT the TRIGGER IMAGE above the video then DOWNLOAD the AURASMA APP. CLICK on the AURASMA ICON AT BOTTOM (looks like a tent) then CLICK the MAGNIFYING GLASS, search for "GBC Sewing with CPD" or follow this link: GBC Sewing with CPD (link broken) CHANNEL and FOLLOW it. This will load an AURA of a thimble, needle and thread (as pictured above the HOW-TO video).
You are now ready to SCAN the PRINTED IMAGE. CLICK on the SCAN ICON (4-cornered square in the middle of the toolbar at bottom of your screen). FRAME the IMAGE within the 4 corners of your screen. You will see a TWIRLING PURPLE SPIRAL that indicates the video content is loading. ENJOY!
RESEARCH SYNOPSIS
Connexions AR Story
Augmented Reality
The term augmented reality (AR) itself conjures images of popular science fiction stories like Blade Runner or The Matrix and, quite frankly, it incorporates aspects of futuristic information access and delivery.
Wikipedia defines augmented reality as follows:
Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.[1] By contrast, virtual reality replaces the real world with a simulated one.[2][3] Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world.[4][5][6][7]
How does it work?
In lay terms, augmented reality is a way of geocaching audio-visual (AV) and text information that can be accessed by a smart device such as an iPad or smartphone. Information is stored on a server, and the AV materials on that server are accessed via a "trigger image" or "marker" that AR software on the mobile device recognizes. Once the unique location and "visual fingerprint" of the trigger image are determined, this information is passed back to the server and used to reference the matching AV materials indexed there. The AV materials are then passed back to the mobile device and "overlaid" on top of the scene that it is recording, allowing content to seemingly pop out of the marker or trigger image.
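To make the "visual fingerprint" idea concrete, here is a minimal sketch of trigger matching using a perceptual hash. It is purely illustrative: Aurasma's actual recognition pipeline is proprietary, and the Python imagehash library, the file names and the distance threshold here are assumptions.

```python
# Illustrative only: a toy "visual fingerprint" index using perceptual hashing.
from PIL import Image
import imagehash

# Server-side index: fingerprint of each registered trigger -> overlay video.
triggers = {
    imagehash.phash(Image.open("avatar_linda.png")): "videos/linda_intro.mp4",
    imagehash.phash(Image.open("pacman_trigger.png")): "videos/pacman_doc.mp4",
}

def lookup_overlay(camera_frame_path, max_distance=8):
    """Fingerprint a camera frame and return the overlay of the closest trigger."""
    frame_hash = imagehash.phash(Image.open(camera_frame_path))
    best = min(triggers, key=lambda h: frame_hash - h)  # Hamming distance
    if frame_hash - best <= max_distance:
        return triggers[best]
    return None  # no trigger recognized in this frame

print(lookup_overlay("current_camera_frame.jpg"))
```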
This software has become increasingly sophisticated over the last five years. Initially, "Quick Response" or QR codes (those funny little digital squares you see on billboards and in newspaper pages that advertisers implore you to scan) were used as markers to access AV information and websites; however, with advances in facial, object and scene recognition software, devices are now capable of recognizing almost anything and driving video to the location where the object is found. Further enhancements allow the viewer to access additional features such as websites, online order forms, etc. by simply tapping their screens. This ability to overlay "tappable" zones on top of an AR scene has allowed for two-way interaction with the object being scanned.
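For comparison, generating one of those early QR-style markers takes only a couple of lines today. A hedged example, assuming the Python qrcode package and a hypothetical video URL:

```python
# Make a QR-code marker that resolves to a (hypothetical) overlay video URL.
import qrcode

img = qrcode.make("https://example.com/overlays/guitar_hero_interview.mp4")
img.save("qr_trigger.png")  # print this and scan it with any QR reader
```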
What are some of the applications?
BMW has used this technology quite extensively to provide contextual information in situ. Mechanics can simply view an engine component through special goggles that then feed AV materials to them on the very part that they are looking at—informing them how to remove, repair, and/or replace it. In short, this allows us to "tether" intelligence to otherwise "dumb" objects and spaces. The implications for exhibit design are also considerable. The ROM augmented its dinosaur exhibit by overlaying video animations on top of the collection in ways that bring long-extinct specimens to life. Imagine pointing your smartphone at a bronze sculpture at City Hall and having a documentary play on the making of "The Archer" by renowned sculptor Henry Moore. Imagine visiting Mackenzie House and pointing the same device at the fireplace or a chandelier, and having the ghost of William Lyon Mackenzie give you a personal tour of the house while providing choice tidbits on the genesis of our country!
What are some of the tools?
There are a number of tools out there, some of which allow you to get up and going for free. Perhaps the most powerful is the game-development platform Unity. Unity, along with a plug-in called ARToolKit, allows developers to custom-build immersive, interactive content that meshes with real-world geometry in such a way that video can be "skinned" onto, and conform to, the real-world perspective of the video camera. Unity and ARToolKit have free trial versions; however, they are not for the faint of heart or the newbie. These are ostensibly developer environments with fairly technical interfaces, and the cost of creating custom-branded environments can run over $100,000! If you are wondering why a game-development environment, it comes down to the fact that blending the real and the virtual, with complex scenarios of tracking actions and reactions, providing hints, scores, incentives, etc., is the hallmark of gaming.
There are much simpler AR tools out there that would suit the novice (and the budget). Aurasma, for one, allows you to upload images and videos to their server using a very simple dashboard, either on your smart device or on your desktop. You simply link your trigger image (it could be a photograph of a sculpture) to your video (a documentary on Henry Moore), then geo-locate it to within a few metres using Google Maps. Once you have uploaded, linked and geolocated your assets to the Aurasma server, you have created what they refer to as an "aura." These auras can then be aggregated under a channel that works much the same way as a specialty cable channel. Imagine creating a channel called "Sculptures in Toronto," where you have amassed a collection of trigger images and related documentaries on sculptures throughout the city. Like specialty channels, people can subscribe to or "follow" your channel. Interested audiences can then look at the thumbnails of your trigger images, click on them and find out exactly where they are located. They can travel to that location, look for the corresponding objects and/or trigger graphics, then scan them to access the premium content overlays.
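The aura/channel relationship described above can be summarized as a simple data model. The sketch below is my own illustration of the concept, not Aurasma's API; the names, file paths and coordinates are assumptions.

```python
# A toy model of Aurasma's aura/channel concepts (illustrative, not their API).
from dataclasses import dataclass, field

@dataclass
class Aura:
    trigger_image: str   # e.g. a photograph of the sculpture
    overlay_video: str   # e.g. the Henry Moore documentary
    location: tuple      # (latitude, longitude) picked on Google Maps

@dataclass
class Channel:
    name: str
    auras: list = field(default_factory=list)   # content people can follow

sculptures = Channel("Sculptures in Toronto")
sculptures.auras.append(
    Aura("the_archer_photo.jpg", "henry_moore_doc.mp4", (43.6525, -79.3835)))
```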
The GPS component also allows Aurasma users to launch the App on their device and query what content is nearby. Try searching for it in your Android or Apple App Stores and download it to start searching your area for hotspots. There are quite a few of them around town. You are walking through a veritable mushroom cloud of video information that you may not be aware of!
It is not always desirable to have your AR assets tied to a fixed location. What if your trigger image is on a TTC advertisement inside a moving bus? What if you wish to make a printed book that gives a student access to supplementary video materials? LAYAR is a company based in the Netherlands that has tailored a unique solution. Regardless of location, LAYAR allows you to create exquisite print-based materials and upload them to their server. If consumers download the LAYAR app to their mobile device, they simply scan the printed page, transit ad or billboard (you were wondering why there are so many accidents reported on our GTA roads?) and the LAYAR server recognizes the unique visual signature of your print materials, serves up the video assets and meshes them right over top of the publication. The interface also allows you to embed questionnaires, links to online commerce, email and more. This has been a definite boon for re-invigorating a struggling print industry and holds out promise for getting our students back into books! With 30M downloads so far, it looks like this platform is gaining traction.
Blippar is another tool that works pretty much the same way as Aurasma, although, to date, they haven't been that receptive to allowing free educational trials (though I wouldn't discourage you from trying). One significant and important difference between the two is that once the Blippar software recognizes the trigger image, you can walk away from the trigger and it will still continue to deliver video. Currently, technologies like LAYAR and Aurasma require the viewer to continually hold their device up to the trigger to establish contact and drive the AR materials to it. Although I think these companies will address this shortcoming in future releases, at present the limitation can be rather annoying when a number of people are waiting for you to consume a two-minute video. Which raises another important limitation: video length. Videos should typically be tailored to the MTV standard of short and sweet. Large video files eat up far too much bandwidth and typically exceed the viewer's attention bandwidth as well. This is not a criticism; it is simply a fact that our students are content surfers who, if not captivated, entertained or enlightened in a short span of time, are culturally conditioned to move on to smaller and better things! Also, given that content is typically consumed on a small screen, there is no need for 1080p HD resolution. All AR assets should be created with this small real estate in mind.
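A back-of-envelope calculation shows why the small screen matters. The bitrates below are illustrative assumptions, not measurements from this project:

```python
# Rough file sizes for the "MTV standard" two-minute clip at assumed bitrates.
seconds = 120
for label, mbps in [("1080p @ 8 Mbps", 8.0), ("480p @ 1.5 Mbps", 1.5)]:
    megabytes = mbps * seconds / 8  # megabits per second -> megabytes total
    print(f"{label}: ~{megabytes:.0f} MB")
# ~120 MB vs ~23 MB: a real difference when a crowd shares the venue's wireless.
```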
How are we using it?
I began a pilot into the use of these technologies in the Knowledge Garden with students in my Knowledge Design 1 course. We worked to support the end-of-year show for game design, entitled "gamER." My students created short, two-minute documentaries on seminal video games in the history of game design. The documentaries were cut together in the Adobe Premiere Pro video editing suite with screen captures taken from live gameplay. They wrote scripts and recorded voice-over tracks in Adobe Audition, which they then synced to the video. They created monochromatic images of game logos and characters that were then vinyl cut and used as trigger images. These vinyl triggers were installed along the hallway pillars on the 5th floor of our game design building as part of the exhibit. An interesting impasse occurred between the needs of the AR object recognition software and those of the vinyl cutter. Vinyl cutters rely on simple, abstract graphic outlines devoid of excessive detail. AR recognition, however, requires lots and lots of data in order to establish a unique visual fingerprint. If things are too simple and too much the same, the graphics will not trigger a recognition event and nothing will happen. After much experimentation and a lot of hard work on the part of Sisley Leung and Jenny Park at the IWB, a happy medium was reached.
We ran into the same problem with another deliverable. The students also produced short personal avatars that looked like highly pixelated characters from an old video game. When exhibition attendees pointed their devices at the avatars, the avatars would dissolve to reveal the real person, who then introduced themselves and told the audience what their first video game was and what they were currently playing. Initially, the avatars were too similar, and the software kept confusing the individuals and playing the wrong biographies. Triggers sharing the same location need to be substantially detailed and unique in order to trigger successfully.
We also managed to use the LAYAR platform to create a printed book that delivered both traditional print-based information and video content on the important games in the history and evolution of gaming.
The Pedagogy
This mode of delivering educational content certainly fulfils the requirement of accessibility. Information can be contextualized by location and by the thing being scanned. It speaks in a technical vernacular that students find fascinating, and it improves engagement. I had the opportunity to run over to the building at the year-end show to see throngs of students lining up with their devices and watching the videos intently. As one student commented (I paraphrase), "If only we could watch lessons this way, it would be really cool."
As this technology matures, further personal contexts such as user preferences, search patterns, location, etc. will provide an opportunity to deliver relevant and highly personalized experiences to the individual. Consider a student of architecture walking through the streets of our city on a virtual scavenger hunt for seminal features and styles of buildings. Imagine that, as they approach a key example, an alert pops up urging them to scan aspects of the building, watch relevant documentaries, then tap and submit answers to queries. It is a place where conventional notions of time, space and the classroom are completely transformed. The learning and the intelligence are embedded in and around us, ready to be explored. It allows us to see the classroom not as a specific geographic location but as an environment of discovery whose internal and external dimensions are ever expanding, changing and adapting to the context of the participant.
Knowledge Transfer
I am currently working with faculty members in the Schools of Design and Fashion to create augmented reality experiences for their students.
History of Game Design AR Exhibit
Beginning in January 2013, Professor Kinney and his students set out to explore the use of Augmented Reality in education. The context was supporting a physical installation of an exhibition on the History of Game Design on the 5th floor of George Brown College's newly opened School of Game Design in downtown Toronto.
Tools
Hardware:
- Apple: Macintosh computer, iPod Touch, iPad and iPhone.
- Samsung Galaxy Tablet and Smartphone running Android OS.
- Whisper Room recording booth.
- MBox Audio Interface.
- Røde broadcast-quality microphone.
Software:
- Adobe: Illustrator, Photoshop, After Effects, Premiere Pro, Audition, InDesign.
- Apple: iTunes, QuickTime, Garage Band.
- AR: Aurasma, Layar.
Visual Thinking/Mapping Tools:
- Webspiration.
- Simple Mind Map.
Process
Concept Development:
Students began by mapping the ideas connected with the notion of games and gaming in order to develop a conceptual framework and branding for the exhibit:
Branding:
Students mapped their concepts to a brand name and visual that embodied their ideas related to the nature and evolution of gaming and one of the ideas was chosen to represent the exhibit. In this case it was the game[er] brand put forward by Evan Gerber. Evan focused on the connection between the person or "gamer" and the electronic and entertainment revolution that the genre spawned.
Workflows
Complex workflows (link broken) were visually mapped to help identify key steps in the production process. Participants had to develop, capture, edit, brand and format video and audio content then "bind" this "overlay" content to a "trigger" image that they created.
AR specific workflow maps were also generated to aid in the production process and to provide infographics for content consumers.
Content
TIMELINE:
Researchers at the IWB created an extensive timeline detailing the history and evolution of game design organized into hemispheres of BUSINESS, CULTURE, ICONIC GAMES and TECHNOLOGY. Participants chose a game from this timeline to feature in their creation of AR-accessible content.
INTERVIEWS:
Having chosen a game from the timeline, students could choose one of two scenarios for the creation of overlay content. They were encouraged to contact the designer(s) of the game that they had chosen and ask them to answer four key questions:
- Why did you build the game that you created? (philosophy)
- If you could go back in a time machine, what would you have changed about your game? What game do you wish you had designed?
- What game are you currently playing?
- Where do you see the future of video games?
Interview with Rob Kay, lead designer of the best-selling game Guitar Hero:
Figure 6. This shows the power of this technology to bring expertise to the seeker in situ, at a time and place convenient to them. AR applications have the potential to break free of the constraints of time, space and scheduling.
DOCUMENTARIES:
If students were unable to contact the designer for an interview, they were required to produce their own short, two-minute documentaries on their chosen game. We developed a common script framework in order to maintain continuity of experience from one video to the next and to ensure that branding was standardized.
Figure 7. Evan Gerber's documentary on Half Life combined screencasts of gameplay with a scripted voice over, ambient sound as well as standardized intros and outros.
TRIGGERS:
Students were challenged with creating trigger images in black and white that would provide enough visual data to enable the AR software to differentiate and generate a unique visual signature for each graphic, and yet be simple and robust enough to be vinyl cut in one colour for transfer to the exhibit walls.
Team Credits
We decided to introduce the members of our AR exhibit design team by providing a peek into their own personal gaming universe. Each participant was asked to respond to the following prompts: What was your first video game? and What are you currently playing? The answers to those questions would be presented in an AR format for exhibit goers.
Green Screened Interviews:
Participants were shot against a greenscreen answering the prompts about their gaming preferences. They were wirelessly mic'd and shot on a 35mm camera.
Avatars:
After the video was shot, the participants generated an avatar of themselves that matched their clothing and their opening poses. The idea was to have the avatar trigger an overlay video matched to the live action, so that the live-action subject would appear to dissolve through the avatar once the AR was triggered.
Finished Avatar Video and Key Issues:
Figure 8. The avatar graphic and the green-screened video were combined in Adobe After Effects and output to a High Definition format. The HD assets were then opened in Apple QuickTime and EXPORTED to a variety of web delivery formats. The "Cellular" format created the smallest, most manageable file sizes. Finding an appropriate balance between video length and degree of compression will ultimately determine the bandwidth consumption. Given that your audience may be piggybacking on your wireless or using their own data, bandwidth should be kept to a minimum. This also helps to reduce lag times when loading video overlay content. NOTE: It is important to make sure that your videos include POSTER FRAMES that can be referenced within the AR publishing interface; otherwise it is impossible to visually establish the contents of any of your overlay videos.
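For those without QuickTime's export presets, the same compression step can be scripted. This is a minimal sketch, not the workflow the class used; the file names, target width and bitrate are assumptions, and it presumes ffmpeg is installed:

```python
# Compress an HD avatar video for small-screen AR delivery and grab a poster frame.
import subprocess

src = "avatar_hd.mov"  # hypothetical After Effects output

# Downscale to 640 px wide H.264/AAC at a modest bitrate to keep overlays light.
subprocess.run(["ffmpeg", "-i", src, "-vf", "scale=640:-2",
                "-c:v", "libx264", "-b:v", "1000k", "-c:a", "aac",
                "avatar_web.mp4"], check=True)

# Extract the first frame as a poster image for the AR publishing interface.
subprocess.run(["ffmpeg", "-i", "avatar_web.mp4", "-frames:v", "1",
                "poster.png"], check=True)
```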
Publishing in Aurasma:
Once the Assets (Audio, video, trigger images) have been created they are loaded and organized in the AR environment.
NOTE: More detailed workflow information can be found near the end of this page.
Testing:
Once auras were generated from the trigger images and the video/audio/web assets were uploaded, the Aurasma application was downloaded onto Apple iPod Touch devices, iPhones, iPads and Android smart devices. Once the App was downloaded, the student researchers were able to search for and follow the History of Game Design channel. Once the channel was followed, all of the relevant auras and their associated content were downloaded to the app. The associated trigger images work as hints, showing the seeker what images to search for in order to trigger the premium content.
Figure 10. Student Kimberley Hall points an Apple iPod Touch device at a printed avatar that is used to "trigger" an augmented reality (AR) experience. This launched a video that began with the same avatar graphic (see above) and quickly dissolved to reveal the actual person standing against the same coloured background. The person shown in the video is student Linda Lamelas sharing what her first video game was and what she is currently playing.
As part of our testing, student Kimberley Hall accesses video by pointing an Apple iPod Touch running the Aurasma App. The video (foreground) is seamlessly overlaid on top of the printed avatar, also known as a "trigger" image. The trigger image is "bound" to a related piece of video, audio or a URL to generate what is called an "aura," which can then be geo-located, socialized, shared and distributed via a channel (we created a channel called "History of Game Design"). The on-screen content can also have "tappable" areas defined and linked to specific web pages via a URL.
Postscript
It was amazing to see throngs of people actively engaging in learning that had exploded beyond the traditional confines of the classroom. One student lamented, "I wish we could learn like this." To which I added, "That is entirely the point of this exercise." This experiment allowed us to pave the way for new models for creating and sharing knowledge. The experience has encouraged me to reconsider the locus of learning, as well as our conventional notions of time, place, scheduling and delivery with respect to teaching and learning.
The invaluable student contributions also point to the emerging importance of the student voice in learning—their direct participation in the construction of meaning and content in the learning environment seems to provide a more authentic and engaging experience for the participant.
The learning is always there, waiting for the intrepid explorer to find it and uncover its bounty. The notion of geocaching learning invites comparisons to a treasure hunt. Exploring the hallways of our school with a smart device is a little bit like having those X-Ray specs that they used to advertise on the back of popular comic books years ago. Our space is bristling with information; you just have to know how to look!
Below is a short student sequence documenting their interaction and impressions of the medium.
Click the link below to follow a more detailed workflow associated with creating the AR exhibit:
AR WORKFLOW
There were two main technologies that we explored in this research project: Aurasma and Layar. Aurasma has the capacity for geolocating your AR experiences and publishing them to a channel that can be socially shared—essentially geocaching content for discovery by an audience. Layar, on the other hand, has a strong print focus. It connects AR to print assets hosted on the LAYAR server. The material is not geocached, allowing the content to be non-location-based and, therefore, highly mobile.
WORKFLOW
Below is a PDF that visually illustrates the major steps in authoring an AR experience using Aurasma. Feel free to download and print the workflow for your reference or CLICK on the EYE icon to preview its contents.
AVATAR VIDEOS
Students created an avatar of themselves to use as a trigger image that would run a video overlay of their live action selves sharing information about their gaming habits—their first video game and what they are currently playing.
Greenscreening
Students recorded their introductions to their gaming preferences against a greenscreen, using a NIKON DX camera with a wireless lavalier microphone attached to capture their audio.
Avatar Creation
These representations were meant to mimic the pixelated look of vintage, low resolution video game characters. Below is a composite featuring all of the team members' avatars.
- content missing
Keying/Rotoscoping
This technique refers to removing the background of a scene (in this case the greenscreen area). We wanted to have the live person dissolve into the cartoon and appear to replace the avatar character with the actual video of that person. We used Adobe After Effects to achieve this.
TECHNICAL NOTE: After Effects defaults to NOT exporting the AUDIO CHANNEL. When creating the final output, go to the COMPOSITION menu > ADD TO RENDER QUEUE, SELECT the OUTPUT MODULE setting and check the AUDIO OUTPUT option at the bottom of the sub-menu (see figure below).
Matching Live Action Video to Avatar
This was done in Adobe After Effects in order to match the size of the live action person to the avatar graphic.
Creating an Aura
This involved using the Aurasma Studio Dashboard (above). I contacted the Aurasma team and asked if they would provide free access to their service for educational purposes, and they were more than happy to accommodate my request. The Aurasma Studio allows you to manage all of your video assets and trigger images, and it enables you to "bind" a trigger image to a related video "overlay" in order to create what they refer to as an "Aura."
You load your videos (overlays) and trigger images up to your account on the Aurasma server, connect the triggers to their related videos, and then "train" the Aurasma software to recognize your trigger images.
Loading and Training Aurasma to Recognize Trigger Images
This image shows a mix of triggers for both the avatar videos as well as vinyl cut graphics used to trigger documentaries on the history of game design (Black and White).
Loading Overlays (videos)
This image shows the video overlay window, where you can upload videos and play them back in thumbnail mode in real time.
Creating Auras: Binding Videos to Trigger Images
The image above shows a list view of all the trigger images and the auras associated with them. It also shows which channel the various auras have been published to. By double clicking on the listed auras you can access an Edit window where you can adjust position and scale as well as interactivity.
Editing Auras: Adding Multiple Overlays and Interactivity
In the image above you can see a preview window on the right that shows how the video overlays on top of the trigger image. You can adjust position and scale to align the two assets. Note that in the bottom-left section you can add multiple overlays as well as define tap actions tied to particular overlays in your Aura. This feature allows the user to tap an area of the screen in order to go to another website, make the overlay full screen, pause the overlay action and more.
Student Kimberley Hall tests a prototype of our printed triggers on an iPod Touch. She is scanning the image within the Aurasma interface. Note the spinning spiral that indicates the loading of content associated with the trigger image.
TECHNICAL NOTE: placing triggers behind transparent surfaces such as glass is problematic. Subtle differences in reflections can create sufficient signal discrepancies to render the trigger ineffectual. Flat, matte surfaces with constant, even lighting work best.
Geo-Location
This is analogous to geocaching of interactive video content. Your content is location based and "discoverable" by augmented reality diggers who are nearby. Unearthing AR content bears all the characteristics of a treasure hunt.
By double-clicking on the trigger image name in the trigger window you can access the edit menu shown above. It has a feature that allows you to geo-tag your triggers, either with GPS coordinates or by indicating the location on a map. This allows your content to be tied to a particular location, which would be good for site-specific education such as offering premium content to visitors in an art gallery or museum.
Publishing
Aurasma allows you to create "CHANNELS" that you can publish content to. These act just like typical TV channels that people can search for and FOLLOW. We published our material to the History of Game Design Channel and made it public so that anyone could find it and connect to our content. Private content cannot be discovered; the publisher must send email invitations to recipients in order for them to access it. This could prove useful for hosting gateways to sensitive information in very public spaces. One example might be graphics that, when scanned, provide information on alternative access points to buildings for firefighters or other emergency services.
In the image above you can see multiple channels hosting a range of related content. These channels can be public or private and can be branded with an icon for visual identification. If public, diggers can search for and subscribe to these channels and have the contents of those channels streamed to their smart devices. Note that each channel shows you how many auras or augmented reality experiences/triggers are associated with each channel.
Socializing
Content can be favourited and promoted in social platforms such as Facebook in order to promote awareness of content nuggets that are available in certain regions.
The image above shows how you can push new content announcements out to your networks in social media like Facebook.
The User Experience of the Content Prospector:
Using smart devices to access content.
The series of images below details how a user can access and use the Aurasma app to discover and engage with Augmented Reality experiences.
If you are ever in the Toronto area, please drop by the George Brown School of Game Design at 341 King Street East, 5th floor, and discover the learning that silently and invisibly clings to our walls!
Subsequent to this project, I used what my students and I had learned to engage a small group of Jewellery, Fashion and Graphic Design faculty with AR in an educational context. We worked on creating short demonstration videos tied to trigger images that they could post in their labs. CLICK THE BLUE LINK BELOW TO SEE THE OUTCOMES OF THIS PROJECT.
FACULTY KNOWLEDGE TRANSFER
I would like to hear from anyone else who is using this technology in a teaching and learning context.
Regards,
Jim Kinney
TWITTER @prof2go
LinkedIn Group:
Moderator for ARE (Augmented Reality Educators' Group)
Paper.li Publications:
RESEARCH ARCHIVES
This is an archive of class activities for the period of January, 2013 to April, 2013.
Influential Video Games
REVIEW OF TIMELINE:
Sisley and I have prepared a history of video games in chronological order in a Word file.
TimelineHistoryofVideoGame (docx)
Everything seems OK to me... but I am not a game aficionado. Jim Kinney
They are not categorized into any groups yet (e.g. games, consoles, controllers, designers, etc.), but we are going to categorize them this week by colour-coding them.
Could you look over this and check that the information is right? Also, see if we are missing anything, or edit out anything that is not necessary.
Also, we are keeping all Video Game Exhibit-related material on the Richmond campus server. It is under:
STORE 2012 > 03.Jenny > Videogame Mural 2013
CHOSEN GAMES:
Post which game you've chosen. If someone's already chosen it, too bad, y'all.
- Julian Ng: Street Fighter II
- Palestrina McCaffrey: Super Mario 64
- Kimberley Hall: Mario Kart 64
- Evan Gerber: Half Life
- Caroline Pursuk: Tetris
- Andrew: Starcraft
- Christopher Jetten: The Legend of Zelda: Ocarina of Time
- Vanessa Valela: Super Mario Bros
- Maija Ksander: Sonic 3
- Zainab Batool: Journey 2012
- Sanjay Pinnock: Angry Birds
- Linda Lamelas: Bubble Bobble
- Cassandra Savarino: Pac-Man
- Dyllin Aleluia: World of Warcraft (WoW)
- Victoria: Guitar Hero
- Kevin Chow: Pokemon
- Mike Bastin: Fallout
- Nathan Brown: DOOM
For anyone looking for more (if you haven't chosen yet), here's a starter list of things to look at:
- Space Invaders 1978
- Pac-Man 1980
- Ultima
- Tetris 1984
- Dune II
- Crash Bandicoot
- EverQuest
- Zork
Games continuing to release today
- Grand Theft Auto
- Final Fantasy
- The Elder Scrolls (some may know these from the more recent releases like Oblivion or Skyrim)
- Pokemon
- Fallout
- Starcraft
- Call of Duty
- Halo
- Rock Band / Guitar Hero
- Wii Fit / other fitness-based games
- Mass Effect
Deliverables & Assessment Criteria
Below is a link to a URL that provides a detailed map showing the course workflow and associated grade weighting:
http://www.webspirationclassroom.com/view/1283346a33ecb - link broken
Exhibit Theme
Use this area to ruminate on the possibilities for the theme of the exhibit by authoring directly here (pencil icon) or adding commentary (right sidebar).
Potential seeds:
FUN, RECREATION/PLAY, RISK, STORY/NARRATIVE, DRAMA, CHARACTER, PARTICIPATION, CHANCE, SKILL, STRATEGY, TIME, PLACE, MIGRATION, SIMULATION-SIMULACRUM, IMITATION, IDENTITY, DREAM, RESOLUTION, AUGMENTED REALITY, CO-HABITATION, CO-AUTHORING, HABIT, DEVELOPMENT.
Origins:
It is interesting to note that the dream-space of cave art was often rendered approximately 300-400 metres into the interior of caves, was highly realistic, and was painted on concave and convex surfaces, allegedly to create a 3D effect in the interplay of torchlight (see Werner Herzog's "Cave of Forgotten Dreams"). Also worth noting are the theories of David Lewis-Williams in The Mind in the Cave: Consciousness and the Origins of Art:
"Emerging from the narrow underground passages into the chambers of caves such as Lascaux, Chauvet, and Altamira, visitors are confronted with symbols, patterns, and depictions of bison, woolly mammoths, ibexes, and other animals. Since its discovery, cave art has provoked great curiosity about why it appeared when and where it did, how it was made, and what it meant to the communities that created it. David Lewis-Williams proposes that the explanation for this lies in the evolution of the human mind. Cro-Magnons, unlike the Neanderthals, possessed a more advanced neurological makeup that enabled them to experience shamanistic trances and vivid mental imagery. It became important for people to "fix," or paint, these images on cave walls, which they perceived as the membrane between their world and the spirit world from which the visions came. Over time, new social distinctions developed as individuals exploited their hallucinations for personal advancement, and the first truly modern society emerged."—Editor's forward in David Lewis Williams' "Mind in the Cave"
Here in the age of iconic forms of communication, there is a process of "digitizing" information using symbolic structures to represent data and meaning. It is markedly less "analogue" than the dream space of the cave: visuals are reduced in complexity and realism, and the boards on which simulations of real-life activities are embedded have embraced a BIT MAP approach—the traditional grid of most games, with horizontal and vertical squares. This represents a move from the more data-intensive analogue form of experience to one whose information is reduced in resolution (digitized) in order to provide a more universal, easier-to-encode-and-control form of information and experience. It is interesting to reflect on this with respect to the trend in game environments. Are we headed back to the cave—to highly realistic instantiations of the dream, with the aural, optical and somatic hallucinations that haptic systems try to recreate?
Video Interviews
Video Interview Guitar Hero
NOTE: Video posted here should, ideally, be encoded as H.264/MP4.
Questions to Ask Game Designers (CLICK LINK)
Letter to Designers
PLAN B: Documentary
DOCUMENTARIES
Mike Bastin's blog post for help with video capturing and with installing/using emulators for gameplay documentaries.
Video Capturing and Emulator Help (pdf)
LINK TO SCRIPT
Permissions on Copyright
Zainab Batool
JOURNEY 2012
Andrew Kim
Starcraft
Nathan Brown
DOOM
Palestrina McCaffrey
Super Mario 64
Cassandra Savarino
PacMan
Maija Ksander
Sanjay Pinnock
Kevin Chow
Dyllin Aleluia (No Logo or gameover yet)
MikeBastin (no logo or gameover sound)
Mike Bastin Final DOC
Julian Ng Street Fighter 2
Kimberley Hall
ipad compatible
SCRIPT
ASSETS:
- Screencast of game in action. AKA GAMEPLAY video.
- Game creator LOGO.
- Still Images SCREENGRABS.
- GAME AUDIO/SOUND BYTES/SIGNATURE SOUND.
- VOICE OVER
- Picture of Designer.
- Game MODS/ADD ONS
- SCRIPT.
SCRIPT for GAME/GAME DESIGNER DOCUMENTARY (RUN TIME approx. 2 minutes):
OPEN ON
- TRIGGER IMAGE CHARACTER OR (if no character available)
- AUDIO: SIGNATURE GAME SOUND
- GAME CHARACTER/LOGO ANIMATES (optional)
- GAME TITLE (FADE IN)
- AUDIO: GAME MUSIC (FADES IN)
- VIDEO/ Transitioning STILLS of GAMEPLAY in action.
- VOICE OVER (VO):
{gamename} was developed by {companyname} of {location} and released on {date} for {gameplatforms}. This game is historically significant because {detail the unique ground-breaking qualities of the game}.
- FADE IN: PICTURE OF DESIGNER.
- VOICE OVER:
Game Designer {designername} from {country/city} {details about unique aspects of the designer, their philosophy and/or their approach to the aesthetics, gameplay, etc.}
- MONTAGE OF VIDEO AND/OR STILLS ACCOMPANIES THE VOICE OVER.
- FADE TO "GameEr" SHOW LOGO.
CLOSING SHOT: "GAME OVER VIDEO" See below:
LAYAR ASSETS
Layarmagazine_layoutfinal (indd)
Archive (zip)
Please start posting your summary text here by clicking the link and pasting text there. Be sure you include your name at top and post 1-2 screen grabs of your game and maybe even coloured logos for the game and game manufacturer.
Summary Text & Other Assets
Layarmagazine_layout (indd)
Please upload your text in the BOX ON THE LEFT and your still photo of the game in the BOX ON THE RIGHT. thanks:)
gam[a_er]_logo (pdf)
LAYAR "gam[er]" ICON 128x128.
TESTING
Exhibition Themes
GAME DESIGN-Zainab Batool (pdf)
Theme_name (pdf)
SavarinoC_Logo (pdf)
Linda Lamelas (pdf)
PursukCaroline_conceptExhibit (pdf)
Caroline. This name is taken. See the Mozilla posters all over the game design building.
gamER (pdf)
Pixel (pdf)
** I like the pixel one, farthest right.
startlogo (pdf)
AndrewKimGameDesign (pdf)
NgJ_exhibitRoughs (pdf)
BastinM_concept (pdf)
wow (pdf)
kevin logos (pdf)
VanessaValela_GameExhibitTheme (pdf)
*I really like the "game on" top one
SONIC history of game design (pdf)
GameExhibition2013 (pdf)
ZAINAB BATOOL_GD (pdf)
TRIGGER IMAGES
Those of you who delivered your trigger images on time had your submissions reviewed, and in some cases re-engineered by Sisley and Jenny, so that they would both translate into vinyl more successfully and integrate into a cohesive aesthetic.
Pick a main character from the game that you have chosen, render it in Illustrator and post it here.
We have access to a vinyl cutter to create signage and transfers of our characters that can be placed around the exhibit space to trigger the AR. Keep the artwork relatively simple and abstract (this works best for object recognition). For vinyl cut we are looking at single primary colours. The challenge for the newer, highly realistic game characters will be interpreting them as essentially monochrome line art for the cutter.
AVATARS
AVATAR-PDFS 2 (zip)
Post your avatars here with your name below and your handle, i.e. Jim Kinney AKA Profzilla! They should be 600x600 pixels @ 300 ppi, CMYK, PNG format, with a flat background that uses one of the GBC logo colours (see the sketch after these instructions).
FORMAT YOUR AVATAR VIDEOS AS PER MY EXAMPLE BELOW. PLEASE UPLOAD USING THE ATTACHMENT ICON and also make a backup in the VIDEO>MyFirstGame_avatar video folder.
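A minimal sketch of normalizing an avatar to the spec above, assuming Python with the Pillow library; the file names are hypothetical, and since PNG has no CMYK mode the sketch assumes RGB:

```python
# Force an avatar to 600x600 px and save as PNG with 300 ppi metadata.
from PIL import Image

def prep_avatar(src_path, out_path, size=600, ppi=300):
    img = Image.open(src_path).convert("RGB")  # PNG cannot store CMYK; RGB assumed
    img = img.resize((size, size))
    img.save(out_path, format="PNG", dpi=(ppi, ppi))  # writes ppi into pHYs chunk

prep_avatar("kinney_avatar_raw.jpg", "kinney_avatar.png")  # hypothetical files
```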
ADDITIONAL ASSETS:
AVATAR VIDEO SIGNATURE SOUND:
Embed this sound at the beginning of your avatar video so that it plays as the video loads.
AVATAR GAME OVER VIDEO:
Place this at the END of your avatar video.
Below is a ZIP file containing the final AVATAR PDFs. PRINT A FEW TESTS TO VELLUM/MYLAR.
final_avatar_pdfs (zip)
AVATAR VIDEOS:
Cut together your video in AFTER EFFECTS using Palestrina's video tutorial (BELOW) as a guide. Open on the avatar, PLAY the SOUND (below), then DISSOLVE to your video (it should match the sizing of your avatar).
Zainab Batool
Kimberley Hall
Kevin Chow avatar (this took 2 weeks' worth of attempts, since this website wouldn't let me log in and upload most of the time)
HOMEWORK
WORKFLOW For Course (Preparation for March 13th)
Assets:
Post your assets to the HISTORY OF GAME DESIGN WIKI and the DESIGNPORTFOLIO SERVER in the appropriate areas.
designportfolio.ca:graf1138_kinneyj:public:HistoryofGameDesign:
- Record and edit audio.
- Edit video and add the signature sound to the beginning so that it plays as the video loads. Add the voice-over (V.O.) and sync it to the on-screen action.
Write SUMMARY TEXT for inclusion in the LAYAR printed publication. Follow the format: GAME NAME>RELEASE DATE>PLATFORMS>DESIGNER BIO>SEMINAL FEATURE OF THE GAME (Why it is considered important in the evolution of gaming).
Preparation for March 20th
- EDIT VIDEO BEFORE CLASS COMMENCES
- PRINT BANNER containing team avatars (Jim, Sisley, Jenny)
- Publish Layar Magazine.
- Tie Videos to triggers.
- Tie Avatar videos to banner images.
- Test Aurasma. Geolocate and Socialize.
Preparation for March 27th
- PRINT LAYAR magazine and test.
Preparation for February 13th
Research Blogs:
Make sure you start making entries in the PEOPLE area and aggregate your research on your designer and the technology you are exploring (ongoing, on a weekly basis). Be sure to SIGN IN; otherwise postings are generic and not attributable to YOU.
Trigger Images:
1. Creation: make sure that they are SIMPLE, MONOCHROME continuous outlines, using the UNITE function in the Illustrator PATHFINDER PALETTE. 2. Testing (technical specs): use Aurasma to test your trigger image and tie it to a short video or image (we can try this in class this week).
VIDEO ASSETS:
Video Interview:
If you are lucky enough to have a designer get back to you by this week then archive the video on the designportfolio server at:
designportfolio.ca:graf1138_kinneyj:public:HistoryofGameDesign:Video:DesignerInterviews
GameStills and Gameplay Video captures:
You need to capture Gameplay action in video and screen grabs from your game. Try for HD assets where possible. Archive to:
designportfolio.ca:graf1138_kinneyj:public:HistoryofGameDesign:Video:GameStillsandVideo
Avatar "MyFirstGame" Video:
We will shoot these videos this week. We will meet in the photography studio and shoot ourselves giving introductions: "Hi. I'm ... and my first video game was..." This will be the video that links to our avatars in the credits of the Layar book that we will produce.
WEAR CLOTHES SIMILAR TO THE ONES IN YOUR AVATAR!!!
Archive to: designportfolio.ca:graf1138_kinneyj:public:HistoryofGameDesign:Video:MyFirstGame
Layar Magazine Layouts: Create rough linears for your page layout. We will print tabloid printer spreads that fold to provide 4-panel letter-sized pages that we can staple together, so you should create a layout for 8.5x11 inches. (See Maija and Victoria, who will be coordinating the generation of the pages in Layar.) THINK COMIC BOOK. Page elements: Game Title, Designer Name, Main Character, Text. Your written research should guide the writing, but it should be done up in thought and talk bubbles, so you will need to SUMMARIZE your writing and set it in a comic-book style of writing: SHAZAM! BAM! POW!
Please review the Concepts that were posted and place an ASTERISK * BELOW your TOP THREE PICKS for the Show name/theme (don't pick your own—use a tissue!). You can also make feedback notes beside the PDFs that you chose.
Preparation for February 6th
AR Trigger image:
Pick a main character from the game that you have chosen, render it in Illustrator and post it in the TRIGGER IMAGES page. We have access to a vinyl cutter to create signage and transfers of our characters that can be placed around the exhibit space to trigger the AR. Keep the artwork relatively simple and abstract (this works best for object recognition). For vinyl cut we are looking at single primary colours.
Personal Avatar:
You will produce a personal avatar that we can print and use in the exhibit as interactive team credits that we could link to a short video introducing ourselves.
Link here to see examples. Use the site eightbit.me to build a basic character that you can then screen grab and import into your Photoshop file and customize.
ACTIVITIES MAP:
Below is a link to a PNG file of a Webspiration Map that details activities for the term.
KG ACTIVITIES_2013 (png)
Preparation for January 30.
Consult the Inspiration Map entitled "Got Game" below:
GOT GAME? (pdf)
Use the map to derive a name and a brand visualization for the exhibit.
Meet online in the Augmented Reality page and discuss which games you will be focusing on. Ensure that each decade from the '70s onward is represented. Choose games that are of seminal importance. Consult the timeline for guidance on the structure of what you should research about your game. Consult the "History of Game Design Map" to see which games are the most influential. Evan (AKA Ryan) has posted these in red bubbles.
Research your game and its designer in detail. Find contact information (Twitter feeds, email, web URLs, fan sites, etc.) for the designer and send the following Letter to Designers requesting their participation.
Post your chosen game in this thread. There is also a list on this page if you're having trouble choosing a game.
Preparation for February 20.
Use the basic script framework that we developed in class and write YOUR script. Time it to be within the two-minute range.
Make sure that you have assembled your assets and that they are ready to go for editing. We will work on cutting together the video in the lab this week. Palestrina will be creating and sharing a workflow for us to follow.
Preparation for March 6.
SCRIPT:
Have it completed and ready to record on this date. Post it here.
FILENAMES:
Use the convention "lastname_firstname_projectname" for all of your files.
BACK UP:
Archive all of your assets (avatar JPG/PNG, game stills and gameplay action video, signature sound, voice-over, avatar video, trigger images, script (MS Word), and the finished documentary when done) to the designportfolio server. It is on the SECOND YEAR volume»graf1138_kinneyj»Public»HistoryofGameDesign». THERE ARE A NUMBER OF FILE FOLDERS HERE FOR YOU.
AVATAR VIDEO:
Finish editing the avatar video and post it here below your avatar JPGs (make sure your name is BELOW your work), and BEWARE of accidentally deleting other folks' posts! Be sure to include the signature sound, found here, at the beginning of your video. The WIKI editing is tricky and can sometimes end in work being deleted. Be patient and work it.
Credits
This project involved me and twenty third-year Graphic Design students from my Knowledge Design 1 Lab at the School of Design, George Brown College. I would like to acknowledge the generous and talented assistance of Jenny Park and Sisley Leung from our Institute Without Boundaries.
I would like to thank the following students. Without their research and the application of the skills that they learned, there would be no AR exhibition.
Dyllin Aleluia, Zainab Batool, Mike Bastin, Kevin Chow, Evan Gerber, Kimberley Hall, Christopher Jetten, Andrew Kim, Victoria Kosecki, Maija Ksander, Linda Lamelas, Palestrina McCaffrey, Julian Ng, Sanjay Pinnock, Caroline Pursuk, Cassandra Savarino, Vanessa Valela
Sincerely,
Jim Kinney