Many of us find ourselves developing PowerPoint presentations, creating teaching & learning materials, and writing content for our blogs. Typically we grab photos from Google Images and paste them into their intended destination. And we do this without concern for copyright or sourcing the image. What? That can’t be. We generally source written content and teach our students to do the same, but we’re much more lackadaisical when it comes to photos.
Technology is helping us find and use openly licensed images for all types of use. This means you can use the images without worrying about copyright infringement, so long as you follow the attribution guidelines. Below are some sources that you might consider using in the future.
When you insert those photos, be sure to describe what is going on. This supports UDL Principle for Representation: 1.2 Offer alternatives for visual information.
Check out this poster created by the Center on Technology and Disability regarding alt text requirements for five different image types: informative, functional, decorative, complex, and images of text.
The following two sources were found at Ontario Extend’s Curator module for Visual Interest.
The post Free photos – it’s way easier than you think to find and use them appeared first on Professor Danny Smith.
My 4-year-old son struggles with bathroom activities but his reading skills are exceptional! (What can I say, I’m a proud dada.) We’ve committed to reading with my son daily, but we have also relied on the iPad to supplement our efforts. To that end, we attribute his success in reading in part to the Teach Your Monster to Read app. Yes, it is acceptable for a 4-year-old to play and learn on the iPad in our house. Sometimes it’s with a parent and sometimes alone. It is important to me that he learn how to use digital tools but, more importantly, how to manage their use. This is why he’ll pick up his Pokemon books rather than the iPad.
Using both traditional and new reading practices would fall within Universal Design for Learning (UDL) principle: Engagement – Recruiting interest: 7.1 Optimize individual choice and autonomy
@OntarioExtend asks what does Digital Literacy for Teaching mean?
Because I have invested time over the past few years learning and implementing ed tech into my daily life, I feel confident using technology in the classroom in meaningful ways. What’s key for me is that tech doesn’t get in the way of learning, rather it supports it.
I generally focus on a few tools per semester or year so that I don’t get overwhelmed, so here is my shortlist:
The post What he lacks in pooping, my son makes up in reading – A nod to digital literacy appeared first on Professor Danny Smith.
I was on an ed tech webinar this afternoon where the presenter just talked and talked. There were no questions, no interaction, no asking attendees what they’d like to learn or why they were participating. This translated into NO me. I bailed within 10 minutes despite being very interested in the educational tool being discussed. And even though this was more of a business-type webinar, UDL principles can apply.
C’mon folks, the time of talking AT someone is over. Instead, let’s engage our online audiences. We have to remember that we are teaching people, not subjects (ref. unknown). This is even more important online.
If you’re promoting a product, service, or just getting a team together to chat online, consider using the features that come with the video presentation software. Doing so will help to engage your audience and make the content more meaningful. Remember, you’re competing with other screens and priorities for attention. Make your online meeting matter!
I turned my attention to some personal online learning and easily completed four 15-minute self-paced modules because the content was engaging, interesting, and fun. The course used quiet design complemented by:
I like to showcase videos of award-winning marketing cases during class and in turn, ask the students to review, unpack, and build upon the campaigns. To my chagrin, I’m generally greeted by blank stares and faces after videos, especially if I start asking questions about the content. Guided by UDL principles, I went looking for a solution this week to help students make sense of video content.
The Cornell approach is well known to those who have had formal teaching training.
For the obviously obvious statement, WordPress is built on a database. The question is, besides data like visitor counts, what can you infer from the data in the posts and metadata itself?
The question was swirling in preparation for a research interview I did today with David Porter and Valerie Lopes for the Ontario Extend project.
My cartoon lightbulb went on over my head, thinking about the blog syndication hub we have set up; because of the way FeedWordPress does its thing, a copy of all posts is saved locally on the site.
I already have it display, for any list of the blogs (like all of them), a count of the number of blogs subscribed to as well as the total number of posts syndicated in:
This is done as well for each of the cohorts, since posts for each are assigned to a designated category, like the blog list for all in the West Cohort.
The lightbulb was that quite some time ago I actually built a plugin for exporting data from posts in a category: my own tool, wp-posts2csv. The plugin allows you to choose the category to pull data from (or just all posts), plus a button to click. It returns a .csv file to download.
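The plugin itself lives inside WordPress, but the same export idea can be sketched from outside using the WordPress REST API. Here is a minimal, hypothetical Python sketch; the site URL is a placeholder and the chosen columns are my own assumptions, not the plugin’s actual output:

```python
import csv
import json
import urllib.request

# Hypothetical site URL -- replace with your own WordPress install.
SITE = "https://example.com"

def post_to_row(post):
    """Flatten one WP REST API post object into a CSV-friendly dict."""
    return {
        "id": post["id"],
        "date": post["date"],
        "title": post["title"]["rendered"],
        "link": post["link"],
        "word_count": len(post["content"]["rendered"].split()),
    }

def export_posts(category_id, out_path="posts.csv"):
    """Fetch posts in one category via the WP REST API and write a CSV."""
    url = f"{SITE}/wp-json/wp/v2/posts?categories={category_id}&per_page=100"
    with urllib.request.urlopen(url) as resp:
        posts = json.load(resp)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["id", "date", "title", "link", "word_count"]
        )
        writer.writeheader()
        for post in posts:
            writer.writerow(post_to_row(post))
```

The plugin does this internally with direct database queries; the REST route shown is just one way to get a similar spreadsheet from any modern WordPress site.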
The thing I was never quite sure about (insert disclaimer here of not being a data scientist) is what is useful in having here, in spreadsheet format:
Here is a peek at the data (showing two of my posts; I give myself permission to use my data about my blog in a blog post on my blog).
I had designed this first, and used it yesterday, for syndication hubs, but it would work fine on any WordPress site. By “work fine” I mean it will spit out some spreadsheet stuff.
But really, what can one infer from this? Is there meaning in looking at word/character count? Use of tags? Use of links?
I dunno (remember the disclaimer)?
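That said, even simple descriptive statistics are easy to pull from a spreadsheet like this. A rough sketch, assuming hypothetical column names like word_count and a comma-separated tags field (which may not match what the export actually emits):

```python
import csv
from collections import Counter
from statistics import mean, median

def summarize(rows):
    """Descriptive stats over exported post rows.

    Assumes each row has a 'word_count' field and a comma-separated
    'tags' field -- hypothetical column names for illustration.
    """
    counts = [int(r["word_count"]) for r in rows]
    tags = Counter(
        t.strip()
        for r in rows
        for t in r.get("tags", "").split(",")
        if t.strip()
    )
    return {
        "posts": len(counts),
        "mean_words": mean(counts),
        "median_words": median(counts),
        "top_tags": tags.most_common(5),
    }

def summarize_csv(path):
    """Load an exported .csv and summarize it."""
    with open(path, newline="") as f:
        return summarize(list(csv.DictReader(f)))
```

Whether mean word counts or tag frequencies actually mean anything is exactly the open question here; the sketch only shows how cheaply you can start looking.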
I did the due diligence of some googling, where first you have to figure out how to filter out all the SEO-seeking and marketing stuff. The best searches I found were for “content analysis of blog posts,” but those seemed dusty too: studies that referred to old horses like Technorati, done in the mid to late 2000s.
A few focussed on comment data, which is something we do not get when syndicating posts (long story, it’s really messy).
I’d like to think my search skills were weak here, so I go Lazy Web and ask for help. What can you do with this kind of data? What else is worth getting to do activity/content analysis? Does anyone really know what time it is?
A long-standing curiosity is that DS106 has been syndicating in content from thousands of blogs for dozens of different classes since 2011. The tanks deep inside the database have copies of 79,000+ syndicated blog posts.
How many research projects have taken on looking at that data?
Near as I know… zip.
I guess there’s no interesting data there.
It’s way late in the day before a long weekend, but I just want to jot some notes down to pick up next week.
Besides spawning an Ontario Extend Daily Extend challenge for July I have more plans for the month.
Participants in the first cohort training were set up with web domains from Reclaim Hosting, sponsored by the project. A lot of things were covered in the face-to-face sessions, and we just barely got into the essentials of domain management, written up as a guide.
As anyone who has first glanced at the cPanel dashboard knows, the array of icons is overwhelming.
I’m planning a four-week series of activities and exercises to provide a group of people an experience together, and a place to gather around. Reclaim Hosting supports a large number of educators with their domains, and while they have a community area, with great responses to questions asked and a lot of documentation, my hunch is most people miss the space.
I had suggested, and they put in place, a “Newbies Corner” where I have been putting out a first few queries about interest in participating in “domain camp.” Here we can “extend” Ontario Extend to reach a larger community. So a bit of camp activity, discussion, and announcements will take place here.
It’s also in my thinking to create a series of activities for campers within the Activity Bank used so far for exercises in response to the Ontario Extend modules. This works well for sharing, as people will be publishing new sites and content on their own domains.
But it was also reading Tim Clarke’s great ideas, shared as a response in the Reclaim Hosting Community, that got me thinking a big part of this could be people like Tim, who have expertise, adding activity ideas to the bank. That’s how the bank was designed.
So I am outlining a basic set of topics I will introduce each week, starting July 9:
A week in camp might include:
These are just some notes transferred from my sketch pad to the blog. I’m thinking that the camp experience would definitely be aimed at people having their first experience with a domain, but also designed for people who have had their domain for a while and would like to dive in a bit deeper.
And this will be a wide-open experience for anyone with a new or existing domain to join, definitely anyone who has gotten a domain through Ontario Extend. We do have some more accounts available that we can share with participants who have progressed through, or are making steady progress through, the Ontario Extend Modules.
So if you have been using hosted blogs for your Ontario Extend work and wish to explore what more you can do when you reclaim that to a plot of internet land of your own, please let me know.
For now, Domain Camp is a graphic and these ideas. I’d certainly like to hear your ideas on how to help people learn their domain landscape. Please comment here or in the Reclaim Hosting Community thread where I started the idea rolling.
Who’s ready for camp?
He talks tough, but he just wants you to be creative and join a challenge to do all 31 Daily Extends in July.
— Alan Levine ? (@cogdog) June 22, 2018
Don’t let him scare you.
He just wants you, and maybe a few more people, to join in and extend your technical and creative skills every day in July. Just follow @ontarioextend on Twitter, where each day the new challenge shall be tweeted, or check in each day at https://extend-daily.ecampusontario.ca. Read the daily each day, put in as much extending as your creativity can summon, and provide a response in Twitter.
On July 1 we will reset the leaderboard (if that’s something that motivates you).
What do I get for doing this? Nothing! Well, nothing tangible, no prizes or badges. Bragging rights? But you will get a chance to try some new tech tools, explore resources, and extend your abilities. You get to do this alongside others.
What happens if I miss a day? Nothing! You can go back at any time and do ones you missed. We are not monitoring when you did a daily.
We see a good flow of interest and activity in Ontario Extend with the dailies. Many already do it every day. So more than just going 31 for 31, perhaps you might aim not to do what is quickest, but to get even more creative with how you respond to a challenge.
You have full latitude to do a daily any way you choose. You can disobey the orders as long as you do something interesting!
Sergeant Hulka Extend Master Class wants to know… do you have what it takes to do all 31 daily extends in July? They won't be easy!
— OntarioExtend (@ontarioextend) June 25, 2018
And yes, we will be upping the complexity on a few of these; it is a challenge after all.
For what it’s worth, this is something I have run a few times for the DS106 Daily Create as a means to rally participation over the summer, when maybe classes were not happening. The tough sergeant first appeared in 2013, and we did have a request to have him back in 2018.
Sorry Tina, but Sergeant Hulka is off in Bali this year for a mindfulness yoga retreat
— Alan Levine ? (@cogdog) May 18, 2018
Hulka is back now for July. We’ve even dusted off the reference to the old Charles Atlas sand-kicked-in-the-face comic.
Don't let the creative bullies on the beach push you around! Get tougher and more creative with the July @ontarioextend Daily Extend 31 day challenge.
Look for more from Extend Sergeant Hulka by end of the week pic.twitter.com/sMDtmsyeme
— Alan Levine ? (@cogdog) June 27, 2018
But don’t let the tough talk and pushing around bother you; Hulka is really a sweet guy. Inside.
Join in and see how much energy we can get around the Daily Extends in July.
A core of the ethos from my years participating and teaching ds106 is the importance of not only creating media, but sharing the behind the scenes “how”.
Typically the idea of doing a DS106 Daily Create or a Daily Digital Extend is to do it quickly. One aspect I always enjoy about the daily _____ concept is the various ways people find to achieve a response.
So a bit of unmasking here, just for the sake of it.
The Ontario Daily Extend #oext215 Redo a Public Domain Cutup was one I have used before on other sites.
The @PDCutup account is a bot that every four hours tweets a mashup of public domain images from two institutions in New York City: one from the New York Public Library and one from the Metropolitan Museum of Art.
— Public Domain Cut-Up (@PDCutup) June 19, 2018
Each tweet includes links to both source images. Your mission, should you choose to extend it, is to use the same images in some other way together to make a new cutup.
I just love the idea of @PDCutup, and not only because it pulls from collections of openly licensed art. The concept behind it is brilliant. It’s all done by a bot created by Matt Miller (read more at Public Domain Cut-up Bot), inspired by New York City billboards, where often you can see through rips and tears in the top image to parts of the previous board.
The result is a Twitter bot that slices through the layers of public domain images, creating new, confusing, and often interesting combinations. The bot works by pairing up two works, starting with the metadata released by the NYPL and the Met. It looks through all the titles and finds one from each that have the smallest Levenshtein distance, or the fewest changes needed to make the two titles the same.
The bot then layers the two images together and takes a digital X-ACTO knife to it, cutting an irregular polygon into it with only a pseudorandom number generator guiding its hand.
It also has another trick it sometimes employs. It can find the most common color in the upper layer and slice out those pixels, exposing the underdrawing, which has a much more “all over” glitchy effect. The result is a new image, an aggregation of two works that often span decades and cut across artistic styles and mediums.
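The title-pairing step Miller describes is easy to picture in code. Here is a quick sketch, assuming a plain dynamic-programming edit distance; this is my own illustration, not the bot’s actual source:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,              # deletion
                cur[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb), # substitution (0 if chars match)
            ))
        prev = cur
    return prev[-1]

def closest_pair(titles_a, titles_b):
    """Return the (title_a, title_b) pair with the smallest edit distance."""
    return min(
        ((a, b) for a in titles_a for b in titles_b),
        key=lambda pair: levenshtein(*pair),
    )
```

Brute-forcing every title pair like this would be slow over two full museum catalogs, so presumably the real bot is cleverer about it, but the principle is the same.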
It’s rather amazing how the bot finds similar images from two different art collections. And I love how its tweets include the links to the source images.
So the challenge for this Daily was to create a different kind of remix image from the same two images used in an @PDCutup tweet.
Among the people who tried this Daily, I’m not sure what Irene used to create her image, which combined two images with a “golden” theme in the title.
— Irene Stewart (@IrenequStewart) June 26, 2018
Lynn’s remix was of two nautical scenes, but she shared a free, web-based remix tool that was new to me.
@ontarioextend #oext215 Remixing @PDCutup from June 26: Portland Head Light from Cushing's Island https://t.co/IA2gHEhTEw and MET: Bear Island Light https://t.co/raiIgZwWyC using Freemix https://t.co/56BrF2OC70 pic.twitter.com/Hv7yXr5J4j
— Lynn Cartan (@LynnCartan) June 26, 2018
Greg created his with Keynote on an iPad.
#oext215 @ontarioextend Remix June 18
NYPL: Plymouth Rock and Harbor from Cole's..https://t.co/unzpSmkjaQ
MET: Bay and Harbor from near Fort Castle..https://t.co/GdKMwVYURj pic.twitter.com/h2AKu1mjCf
(Used Keynote on iPad instant alpha to mix in MET photo) https://t.co/blPMsEppkV pic.twitter.com/bwX7na6l3D
— Greg Rodrigo (@greg_rodrigo) June 26, 2018
Steven used Pixlr, one of my favorite web-based graphic tools.
— Steven Secord (@stevensecord) June 27, 2018
but later was inspired by Lynn to redo it using the Freemix tool she recommended
— Steven Secord (@stevensecord) June 27, 2018
This is a lovely circle of creative effort, where I’d like to think people are driven by interest and creativity, and not just checking an assignment box. And in a tweet, people are not just posting their media, but sharing their tools and giving attribution to the sources. Well done, Extenders!
I’m publishing the dailies but also trying to be in the mix too, so here was my contribution:
— Alan Levine ? (@cogdog) June 26, 2018
I liked the similarity in themes of dark ballrooms from the two original images:
Grottos, Caverns, Grunewald Hotel, New Orleans, La. public domain image from the New York Public Library
My go-to here is Photoshop, software I am still learning after first trying it in 1993 ;-) But my remix game definitely went up a few notches years ago when I started using alpha masks in my Photoshop editing.
Its power is that you can composite images without destroying the originals. You use a “mask” layer: what is painted white in the mask is visible, and what is black is masked. So you can fine-tune edges and selections by painting in the mask layer.
I started by loading the Grotto image in Photoshop and just trying to think what kind of way I might use it with the darker image of the dancers. The idea that came to me was making that front table a prime spot to see the dancers, so I used the magnetic lasso selection tool to get close to selecting around the table and tops of chairs. When using masks, the selection need not be perfect, as you can adjust later in the mask layer.
Selecting the front table
I added the white border to the selection. With that selection active, I went to the Grotto image in another window, did a quick copy, and returned. Here is where I use a nifty tool: under Edit > Paste Special, I select Paste Outside. What this does is create a mask from everything I selected, so I get an image where the table is still visible, but behind it you see the dancers.
Using Paste Special to put the dancers image outside of my selection
The image I pasted in now has an alpha layer mask (the black and white to the right)
If I select the alpha mask, I can zoom in and paint more black with a small brush to mask the dancer layer and see through to the Grotto one, like the gaps in the chairs and the slop around the edges of my original selection. When I zoom in I can see how well they sit together:
Then I can even command-click the alpha layer to load it as a selection, go to a new layer, and fill it with a color (just to demo this). This shows how I can put stuff behind the table:
How the mask provides a selection that appears behind the table
Discard that pink layer!
Just to make it look less pasted together, I return to the dancers layer, and drop the opacity to maybe 80% so you can see through it slightly, and here is my final composite:
This all took maybe 20 minutes, and the blog post to try and explain it took three times as long!
Masks and adjustment layers and such sound like mumbo jumbo, but if you want to create photo realistic mashups, they are rather essential in my book for my graphic work. I use them all the time.
That’s the behind the scenes look at my Daily Extend.
Featured Image: Unmask Me (Me) Sketchport image by Don’t Speak Silent licensed under a Creative Commons CC-BY license. This may be the first time using an image from this site, but they should be commended for not only explicitly licensing images but also giving credit to the person who made it, unlike other scammy image scraper sites.
Call me a web old fogey, I don’t care. I cannot blog enough about the value of using an RSS Reader for tracking a set of blogs and web site sources that are important to your work.
I just added this as a new Ontario Extend Activity Bank item — This Indispensable Digital Research Tool, We can Say, Without Lying, Saves Time as an activity for the Curator Module.
Admittedly it’s a rewrite of a blog post from a year ago, The Indispensable Digital Research Tool I can Say, Without Lying, Saves Time (I consistently misspell “indispensable” but like using it). But for the activity, I changed the instructions to import OPML feeds, to first experience how using the Feed Reader to track activity from the 50+ Ontario Extend blogs is better than remembering to scan the posts on the syndication hub. Or relying on Twitter.
This activity asks you to set up an RSS Reader (it could be any one, but my examples use Feedly) and import one of the OPML files that represent either all of the syndicated blogs or ones from a cohort. For this activity, it’s asking people to just experience how the reading of 20, 30, 50 blogs in one interface is a rather efficient way to keep tabs on many sites.
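Under the hood, an OPML file is just XML: each feed is an outline element with an xmlUrl attribute, which is what a reader like Feedly subscribes to. For the curious, here is a small sketch of pulling those feed URLs out yourself with Python’s standard library (the sample OPML in the test is made up):

```python
import xml.etree.ElementTree as ET

def feed_urls(opml_text):
    """Extract feed URLs (xmlUrl attributes) from an OPML subscription list.

    OPML nests <outline> elements inside <body>; only the ones carrying
    an xmlUrl attribute are actual feed subscriptions.
    """
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```

Nothing magical: any feed reader that imports OPML is doing essentially this before it starts fetching feeds.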
This is especially useful for teachers who are considering having their students do some of their work in their own blogs. I could not imagine teaching this way and relying on my memory to read 25 or 30 blogs and keep track of what I have read. With some extra dexterity I have also set up sets of feeds for the comments on my students’ blogs. Being able to scan at a glance the level of activity on student blogs and comments is a huge aid in following all the activity.
I also used this in my seminar last year for MA thesis students; I had them set up a feed reader to read each other’s posts more easily, but also encouraged them to create their own feeds for their research areas.
In followup activities to be written, participants will create sets of feeds for their own interest areas and learn how some of the internal curation tools (e.g. sending favorites to “boards”) can be used for more curatorial approaches.
Maybe talking on and on about this makes me sound like some mad web scientist, but I remain assured of the value of browsing the web via a Feed Reader.
It’s never an either/or with getting information from the social streams; they both feed me in different ways.
But do us a favor and check out the new Ontario Extend Activity– This Indispensable Digital Research Tool, We can Say, Without Lying, Saves Time.
Featured Image: “Combining both the rake and hoe. It is a Perfect Weeder. Specially adapted to cutting weeds and grass, shallow cultivation, and stirring of the soil of all garden crops and flower beds. It is neatly and strongly made, the blade of the best spring steel, sharpened on both edges.”
I’m a big proponent of using images with my online writing, all of my blog posts start with an image before I even write.
But sometimes you do not have access to upload images, though with a little bit of know-how you can sometimes insert them. This happened just today to Danny, a participant in Ontario Extend, who had posted his response to the Collaborative Dining Activity.
The editor does have an insert image button but it does not allow uploads directly to the site:
What this button asks for is a “source” …
What is the source?
… where the source it wants is a URL to an image that exists elsewhere in the internet. That’s maybe not well understood.
This sometimes means image hotlinking, not always the best approach if you are using a link to some other web site. This is not always kosher, because the image may someday be removed, but also it might mean you are putting a demand on someone else’s web server.
You can find a raft of free services to upload images and then get public URLs for them. But this still makes you reliant on a third party.
But it’s legit to do if it’s your image and you put into a place you manage. Here are a few options. You may know more.
I upload all my photos to Flickr; let’s use as an example this resilient dog I know.
Since I have my own domain, I have the ability to just upload it to my site using file transfer tools. I keep a directory on my cogdogblog.com domain for such “stuff”, so I can put Felix’s photo there and use this URL anywhere a site expects an image
Copy that url to a new browser window, and load it. Hello Felix!
Not everyone has their own domain, but if you do have a blog, even a free/hosted one, you have a place to store media. Since I have a blog (well, several, and hopefully you do too), I can upload my photos to my blog. In a WordPress dashboard (the black menu interface), I can go to Media > Add New.
Once it’s been uploaded, way over on the right is an “Edit” button
Once an image is uploaded to your WordPress site, look for the Edit link
And from here we can find the image’s File URL
Getting the URL for an image uploaded to your wordpress site
And after all that, my next example: I can use this anywhere on the web, like the Extend Bank’s Image Source.
Sometimes in a pinch I might just be editing a post, like this one, or start a new one that I later discard, using the Add Media button, all just to add an image to my library.
I actually just skip inserting the image, but once uploaded, again, on the right side, I can get another image URL
So any place I have a blog, I have a place I can generate a URL for an image, here from my self-hosted WordPress blog.
I could also use a WordPress.com blog and just upload an image to my library, so I can return later to grab an image I might want to use by its URL.
And look! Another place I manage myself where I can get a URL for the same image.
Google’s Blogger provides this too, maybe a tad uglier, but still…
I might put the same photo in my Dropbox and create a public link:
Hey, same dog, same photo, but a different URL. But one I own.
Maybe I am polluting the web with Felix’s photos, but this is now a permalink, “a permanent static hyperlink to a particular web page or entry in a blog.” One I control, not someone else’s.
I can decide to make it permanent forever, or remove it. It’s mine.
All of this is maybe way too long an explanation for Danny’s response, which initially had no image. But Danny did have a picture. And there is another way, which is asking me for help, so I added his image.
Either way, using images is important in communicating online, and getting a bit more savvy about how to use image URLs might come in handy.
Twitter recently enabled features to add descriptions to uploaded images, a huge boon for accessibility. That’s a feature; how do you make it a habit?
Well, by doing it. I’ve been trying most of the time.
Alt text provides additional information to the HTML created when images are embedded in web pages. The primary purpose is for increasing accessibility for visually impaired people traveling the web; screen reader software will use it to describe images they cannot see.
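In markup terms, that description lives right in the img tag as its alt attribute. Here is a rough sketch of auditing a chunk of HTML for images that lack one, using Python’s standard-library parser; note that an intentionally empty alt="" is actually correct markup for purely decorative images, so a check like this is only a first pass:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Flags both a missing alt and an empty alt; an empty alt
            # may be deliberate for decorative images, so review by hand.
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

def images_missing_alt(html):
    audit = AltTextAudit()
    audit.feed(html)
    return audit.missing
```

Running something like this over your own blog’s output is a quick way to see how consistent your alt text habit really is.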
This video is from 2010, but imagine experiencing Twitter like this.
I just played a bit with the Mac OSX Voiceover utility to read just 2 of my tweets. Enjoy the experience:
Twitter did add options for alt description tags on tweeted images, but it hardly got use or attention until a widely shared tweet from @_Red_Long.
I’m a blind twitter user. There are a lot of us out there. Increase your ability to reach us and help us interact with your pictures, it’s really simple and makes a huge difference to our twitter experiance allowing us to see your images our way. Thanks for the description ? pic.twitter.com/hCsjoFdmev
— Rob Long (@_Red_Long) January 3, 2018
Still, to do this you actually have to turn on a feature buried in your Twitter settings; one might wonder why it’s not on by default.
In fact, I wonder if Twitter knows how few people have bothered to enable the setting.
I’ve been using it the last two months within the Echofon app on my iPhone, trying to be consistent, though I am far from getting it done 100% of the time. Maybe 85%? It is a problem because most of my tweeting is done in TweetDeck, and I do not see this feature enabled there.
Here’s one example:
— Alan Levine ? (@cogdog) June 20, 2018
In Echofon there’s an editable field over the image, so it’s really not much effort to do:
Adding image descriptions via Echofon
I don’t see anything in the browser (because I can see the photo), but there in the HTML I confirm the alt text is present:
And in using the web interface, likewise, after uploading an image, I get an Add Description editable field.
Adding an image description in the twitter web interface
That’s the functional part of this — this is a buried option. Accessibility is 15th in a list of twitter settings. Last. Dead last.
Does the order of settings indicate something about twitter’s stance on accessibility?
Although Twitter’s support page is titled How to make images accessible for people, all it tells you is how to turn the feature on. When the feature was announced in 2016, the marketing people honed this vacuous statement:
We’re excited to empower our customers and publishers to make images on Twitter accessible to the widest possible audience, so everyone can be included in the conversation and experience the biggest moments together.
So excited that the accessibility setting is 15th in the list? That it is not enabled by default? That hardly sounds like “including everyone in the conversation”.
But more than that, there is the craft of writing this. And this is something I have been trying to hone. What are the most important things to get into an image description so someone who cannot see the image can read enough to understand?
I found a bit more help from WebAIM on Alternative Text, with mention of considering the image as content vs. function, and its context, with many examples discussing the ways one might write image descriptions.
Pretty much elsewhere you find helpful suggestions like “Describe the image as specifically as possible”. Gee I never would have thought of that. And it should not start with or include “image of” or “picture of”. Or that it should be “meaningful”.
It is worth noting that the description should be both complete and as short as possible.
The more links I read on googling “How to Write Good Alt Text” the more dismayed I got in how the bulk of the results were about the mechanics of doing it.
I’m sure there are some better guides out there.
But my approach is just to keep trying my own ways to write image descriptions and aim to get better just by the act of doing.
Frankly, this exercise underscored how accessibility is really a 15th item priority for most people. Maybe you can uprank it yourself.
While I was nervously planning my first major event at the Maricopa Community Colleges as a young instructional technologist, my stomach was a mess. My mentor, Vice Chancellor Alfredo de los Santos, assured me that all I needed to do was get people in the room and give them something worthwhile to do.
Like many of Alfredo’s sayings, that one has held true.
The idea is just to set up a time to meet and talk about whatever comes to mind for the people who show up. How often do we get such unstructured time? And today worked well with our group of @stevensecord, @IrenequStewart, @ProfessorDannyS, and @NurseKillam.
Danny has an idea he’s about to unleash on us…
Some loose notes follow, assisted by other items in the chat log.
Several folks still mentioned some struggle with the Extend Modules, especially the Scholar one (see the video conversation I had with David Porter on the Scholar Module) and also the Technologist one. I urged them to interpret the activities in a way that makes sense and has meaning for them (e.g. if their faculty role is different than having a class now, there are other places/approaches for doing SoTL research).
We quickly went into some news shared by Danny about Microsoft purchasing Flipgrid, meaning many of the features that were fee-based will now be free. It’s a platform for posting video question-and-response discussions, with many features for setting up (e.g. limiting to 90-second responses).
See more at https://flipgrid.com/:
Flipgrid is where your students go to share ideas and learn together. It’s where students amplify and feel amplified. It’s video the way students use video. Short. Authentic. And fun! That’s why it’s the leading video discussion platform used by tens of millions of PreK to PhD educators, students, and families in 150 countries.
Irene noted that for accessibility it’s a great feature that automatic close captioning is built in, and as Danny reported, transcripts can be downloaded. Laura asked in chat if it can be used with an LMS, and I believe there was a “yes”.
Several said they use it as a way for student introductions. And it has been used for a previous Daily Extend (authored by Steve). But what other ways might it be used? It was suggested for in-class students to record concepts, teach each other. From the FlipGrid website:
Don’t just ask a question. Expand their world and ignite a discussion! Foster previous experiences, share a booktalk, discuss current projects and events, delve deep into STEAM ideas, or collaborate on anything. If you believe it’s valuable for students to verbalize their learning … that’s a Flipgrid Topic!
You are 100% in control. You can moderate videos, provide custom feedback, set the privacy rules, and much more.
We had a bit of round robin of talk about, for productivity software, whether people are “Microsoft People” or “Google People” (or “best tool at the moment” people).
There was a question about content on YouTube being taken down, and what that might mean for individuals (probably little) via this tweet from Grant Potter
"Several popular YouTube accounts including those belonging to ‘MIT OpenCourseWare‘ and the ‘Blender Foundation,’ have suddenly had all their videos blocked." https://t.co/a4Z9DE70MU
— Grant Potter (@grantpotter) June 21, 2018
The latest update to this story suggests the issue has been cleared up, but we recognize that Google/YouTube gets to call the shots (and also, hosting video servers is no fun at all). I was able to watch this beautiful animation (and be very drawn into it) by the Blender Foundation, one of the organizations thought to be affected.
We talked a bit about domains of ones own, and how that was something available to Extend Participants. Laura shared some info about how she uses her domain nursekillam.com.
I’m planning on running a four week mini “Domain Camp” starting July 9 as a way for people new to domains (or experienced with them) to learn more about how things work and what is possible.
Then, inspired by this tweet stream by Jesse Stommel, was a lively discussion on plagiarism and the role (and problems) of tools like TurnItIn.
— Jesse Stommel (@Jessifer) June 20, 2018
And in what I really like seeing happen in this kind of gathering, Danny asked for some feedback on dealing with an issue of grading participation in group projects. Not quite relevant, but I shared how in one of my DS106 courses where students did group radio show projects, one group cleverly did a show, “The Science of Group Projects”, about dysfunctional group projects! I found a link (maybe for my own nostalgia’s sake).
Danny noted that he tracks what people do in groups by having students communicate in WhatsApp groups. There was some interest in asking Danny to do a demo for us next week.
Yes, the hour went quickly, but it was a really engaging open discussion. I’m putting out a challenge next week for our lunch crew to recruit a colleague to join in; we have room at the table. And also this suggestion:
We needed a 2-hour lunch lol … maybe next week we could talk about blogging … to separate the aspects of the self or not? It felt weird to blog about family and teaching things in the same place.
— Laura Killam (@NurseKillam) June 21, 2018
The lunch table will be open next Thursday at 12:30pm EST, pull up a chair and join us.
Featured Image: Pizza photos shared to twitter by @thomcochrane – a friend and colleague from Auckland, New Zealand, who inspires me with his home cooked pizzas (I’ve sampled them in person). As far as licenses for tweeted images… well, who knows? I’m fairly sure Thom would be okay with this.
The practice of archiving or reclaiming the “stuff” one does on the web is mostly framed on the preservation of our stuff, after sharing it into some site one does not own nor manage, chosen because it was easiest.
Easy has a price.
The thing Occam shaved with suggests the simplest is better, not the most convenient.
Archiving something to make sure it’s never lost is important, but I have a different angle on shaving with your own Occam brand razor. Developing a process for managing all the stuff you create before it goes into the cloud is not only about saving it from destruction, but more so, about making it as useful to yourself as possible.
The cloud should never be the primary place to store / manage what you create; it should always be the exhaust.
Doing 500 per batch, this would take me only 122 downloads.
Fortunately I do not have to worry, because I organize my photos *before* sending to flickr.
The cloud should be the exhaust of your content, not the primary storage. https://t.co/AkfCGCOsQO
— Alan Levine ? (@cogdog) April 30, 2018
My thinking here is informed by a process / strategy I have been using for managing my photos since honing it in 2009. I edit, organize, title, write captions, and tag all my photos on my computer at home, using a photo management tool to organize them, and store them on local hard drives (multiple copies, stored in different locations). The cloud is where things go after that.
I maintain all my photo data in a copy of Aperture I run on my computer, with source files stored on external drives (and synced to copies). When I edit my photos in Aperture, I also write the titles, descriptions, add tags, dates, locations, even add a license statement into the image metadata… so my definitive archive is one I manage, not flickr. When I export to flickr, the FlickrExport plugin for Aperture transfers all the data to flickr, and even writes back to Aperture the URL to the flickr page it created (read more details on my Aperture strategy).
There’s an organizational scheme I learned in a photo workshop with Bill Frakes and Don Henderson. In Aperture, and in how the photos are stored on my drives, there is a high level directory structure:
Within each folder, I have projects or folders organized by year, and within those by separate events, e.g.
Even if I have to look for photos on my hard drive, this gives me some sense of where I will find them. There are many different ways people create an organizing structure; this is one that made sense to me over time.
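The year-then-event scheme can be sketched in a few lines of code. This is a hypothetical illustration; the folder names here (“Photos”, the date-prefixed event slug) are my assumptions, not the exact directory names from the workshop:

```python
from pathlib import Path

# Hypothetical sketch of the year -> event folder scheme described above.
# "Photos" and the event slug format are illustrative assumptions.
def event_folder(base: Path, year: int, event_slug: str) -> Path:
    """Create (if needed) and return a folder like Photos/2018/2018-06-21-extend-lunch."""
    folder = base / "Photos" / str(year) / event_slug
    folder.mkdir(parents=True, exist_ok=True)
    return folder
```

The payoff is that even without any search tool, you can navigate straight to a photo by remembering roughly when the event happened.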
I currently have 61,580 photos shared into flickr, putting them there since 2004. And yes, every year it seems there is panic that flickr will fold. But I don’t have to worry about downloading or exporting my photos because I already have them.
I was thinking about this a few clicks ago, reading Aaron Davis’ Managing Content Through Canonical Links. His post was spawned from a twitter conversation with the prolific visual artist Amy Burvall.
Amy, wondering if you store all your images centrally anywhere? I checked https://t.co/kP22W5rY0X and it seems to be a bit outdated
— Aaron Davis ?? (@mrkrndvs) March 14, 2018
Argh. Yeah that is the bane of my existence. All are on Instagram from the past 2-3 years but that’s unsearchable unless I’ve added a unique tag. If you have any suggestions I’d love them. Were you looking for a particular one I could send?
— ?????????? (@amyburvall) March 14, 2018
In a conversation on Twitter discussing the archiving of images and canonical URLs, Amy Burvall explained that much of her work is simply stored on Instagram, which can be problematic.
And it’s understandable why Amy shares into Instagram – she travels a lot and can share and post things from her mobile. It’s quick and takes very little effort.
Friends like Bryan Alexander are fans of the way Google automatically uploads and organizes, even identifies your photos. It’s interesting, but then you have chosen to make their cloud your primary source. I’m not a fan of that.
But, as Amy notes, she knows it’s a problem. With 12,745 photos stored in Instagram, it’s virtually impossible to find your own photos that are older than the most recent 15 or so. Doesn’t it seem strange that the most basic information retrieval function, search, which powers maybe the biggest entity on the internet, is completely absent in Instagram?
Once you find yourself endlessly scrolling to find your own photos, that quick-to-post convenience factor evaporates.
We can hypothesize that they don’t offer this because they want you scrolling past ads, not leaving the platform. Or that they decided not to offer search because it’s computationally intensive? And don’t think a clever google search will help.
All you will get is her stream. Instagram blocks google from indexing anywhere beyond the front of someone’s profile.
So you might try to use tags, but you still end up scrolling through your own things.
Flickr, on the other hand, as old and crippled and antiquated as people make it out to be, offers me full keyword search of my own photos.
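For the curious, that keyword search is also exposed through the Flickr REST API via the flickr.photos.search method. A minimal sketch of building such a request, assuming you have your own API key (the key value below is a placeholder):

```python
import urllib.parse

# Sketch: build a Flickr API URL that searches one user's own photos
# by keyword (method flickr.photos.search). The api_key argument is a
# placeholder; you would use your own key from Flickr's API service.
def build_search_url(api_key: str, user_id: str, text: str) -> str:
    params = urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": api_key,
        "user_id": user_id,
        "text": text,
        "format": "json",
        "nojsoncallback": 1,
    })
    return "https://api.flickr.com/services/rest/?" + params
```

Fetching that URL returns JSON describing the matching photos, which is exactly the kind of “find my own old stuff” capability Instagram does not offer.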
It’s not a case of “flickr being better” — I am a regular user of Instagram. Amy’s process is something like:
Her original photos I can guess are stored on her iPhone. And we know how easy it is to find photos stored on an iPhone. Scroll. Scroll. Scroll. Scroll. Scroll. Scroll. Scroll.
My process is:
My original photos are stored on my own hard drives, and organized in Aperture. I can search for them in flickr or in Aperture.
Now many people may not even care about or need to find their older photos. Mine is not the only way to use social media. Friends have told me of teens who regularly delete all but their most recent 20 photos. There is nothing wrong with using social media as a place for things that are disposable.
But my media, and not just my photos, are the things much of my work is built on. I use a similar organizing structure for my clients, my presentations, my video projects: organize by year, and inside that organize by project. Inside there I typically create directories for images, audio, and video. I keep all source media I have downloaded, and floating at the top is a text file named credits.txt that includes the name, URL, and license for anything I have downloaded.
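That credits.txt habit is easy to script. Here is a minimal sketch; the one-line “name | URL | license” layout is my assumption for illustration, not the actual format of the file:

```python
from pathlib import Path

# Minimal sketch of appending one downloaded item's attribution to a
# project's credits.txt. The "name | URL | license" layout is an
# illustrative assumption, not the exact file format described above.
def log_credit(project_dir: Path, name: str, url: str, license_note: str) -> None:
    credits_file = project_dir / "credits.txt"
    with credits_file.open("a", encoding="utf-8") as fh:
        fh.write(f"{name} | {url} | {license_note}\n")
```

Appending an entry at download time takes seconds; reconstructing attribution months later can take hours.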
Maybe I am not always that neatly organized, but I keep track of all the things I download and use in a project, here is a screenshot of a folder for a presentation last year.
Random files sitting on the desktop or in a downloads folder serve no purpose later. But if I remember creating that GIF of the Jetsons for the talk in South Carolina, I can usually find my way into the media folder.
Oh here it is, along with the source media, the Photoshop file I made the GIF from. It’s like a project within a project.
My system is far from perfect, and I do lose things and have places where stuff is strewn about. And it takes more time to organize your media locally than just shooting it into the cloud. But it has paid off more than enough times since I started a system. For anything I make, a web site, a video on YouTube, an audio track on SoundCloud, there is a folder of media and sources somewhere on my computer.
There is some inertia to get started. If you have 12000 unorganized photos, maybe just leave them there and start anew. I have a huge box of CDs of digital photos I took before I began organizing in 2009. It’s my own running joke that “one day I will import those photos and organize them.”
If you organize first at home, you never really have to “reclaim” your stuff because you already have it. And I often think of stuff as being “co-claimed” where it really is not important for me to have it all self hosted.
Aaron Davis, Greg McVerry, Chris (not Clark ;-) Aldrich and others have been working hard at IndieWeb ideas of POSSE (Publish on your Own Site, Syndicate Elsewhere) via microblogs and web mentions, where they publish everything on their own site and push publish to twitter, etc. It’s interesting stuff and harkens back to the day when some of us thought trackbacks were love.
But that’s the publishing flow. I’m advocating that even before getting to that part, if the stuff you make and create has potential future value to you, figure out your own strategy for organizing / archiving it at home. All the sharing should be exhaust (not exhausting).
You’ve gone to some trouble to create stuff, and then to share it; it seems worth it to make it useful to yourself in the future.
How do you organize your stuff?
I don’t know about you but the word “scholar” in terms of my teaching comes across with a heap of connotations, many of which have me thinking “I’m not a scholar.”
The Ontario Extend Scholar Module aims to break this down into an approach that is less about “being a scholar” and more about what most of us want to be doing: teaching better.
This module examines how you can use your classroom and your courses as a research lab to explore how you might improve your teaching practice and positively affect student outcomes and their satisfaction with the overall learning experience in your course. It invites you to consider research about teaching and learning within your discipline and provides a process to implement a research plan.
This kind of action research is often called the “scholarship of teaching and learning” (SoTL), and it involves an awareness and appreciation of effective, research-based, discipline-appropriate pedagogical approaches for examining your own practice.
This module is listed at the end of the other five, and in many ways, is a capstone to the series. But are people put off or left wondering how they take this on?
For the cohorts we are bringing along over the summer, we have asked module authors and people experienced in the program to write a welcome post for each module, what we have been organizing as Ideas from Lead Extenders.
Previously, David Porter, CEO of eCampus Ontario and also author of the Scholar module, wrote about Reflective scholarship is a portal to improved practice.
Instead of asking David to write again, I had an idea to have a short video conversation about the module. In our 11 minute chat, we talked about David’s scenario, how a change of teaching modality forced him to do some research into his approach, and suggestions for people taking a foray into the Scholarship of Teaching and Learning.
The design of the Scholar Module breaks it down into a series of scaffolding activities that should make it less of a big undertaking. I also like how the activities are built around creating a series of planning documents shared in a place where you can find the work of others.
So are you ready to get your scholar hat on? Head this way…
Featured Image: Single frame of Conversation With David Porter on the Scholar Module YouTube video shared under a Creative Commons CC BY Attribution license