10 things I loved about the Digital Humanities & Data Visualization Master's Programs at the Graduate Center

Last semester I was lucky enough to audit a few classes at the Graduate Center. I was doubly lucky: when I arrived at the Grad Center, this was the first year these master's programs were running. That came as a great surprise, since I had no idea about them when I first expressed my wish to come here.

When I saw the courses, I could not help asking if I could audit them. I was too curious, and the professors were kind enough to let me sit in. I therefore audited Introduction to Digital Humanities (with Prof. Matthew Gold and Prof. Steven Brier), Data Analysis and Visualization (with Prof. Lev Manovich), and Data, Place, and Society (with Prof. Kevin Fergusson).

I loved them for very different reasons, but I will leave the full descriptions for separate articles. For now I will just accept that Buzzfeed has influenced my taste for listicles, and proceed to a list of what impressed me during these classes. Here we go.

  1. The courses had a very solid introductory basis. These fields might seem a bit intimidating at first, because they are quite new and may involve digital skills. But the courses include readings that not only put the fields on the map, but also show how they connect to other interests and topics. I might be biased, but I do believe most of these readings should be read by anyone who interacts with digital media.
  2. The dialogue was fantastic. I mean it. I have never been part of such effervescent yet productive group discussions. I am not sure whether that happened because the readings were challenging, the topics were very hot, or because class activities were directed towards dialogue, but I am sure I learnt a lot from these discussions.
  3. People came from different backgrounds, which generated very interesting debates. I really liked listening to so many perspectives. People here come from Computer Science, History, Gender Studies, Media, Game Studies and many more fields, and this is how you break your filter bubble and get a 360-degree view of a topic.
  4. The practical assignments were very useful. Students had hands-on, and also fun, projects: building data visualizations from scratch, collecting different types of data, or doing text mining on favorite topics. This way, everyone could see that those intimidating methods, terms and subjects are not actually that far from humanistic research.
  5. Students can ask for support from the Digital Fellows, as well as attend workshops. And that is amazing. Imagine you are attending a master's program, you are interested in one particular topic, and you have an entire team out there to hold your hand while you struggle with, let's say, learning Python for your data analysis. Workshops range from writing for Wikipedia to Machine Learning, GIS, and using Zotero for better research. Wouldn't that encourage you to actually develop your project? It certainly did for me.
  6. The courses teach you how to be a "doer": in other words, how to practically build a project while keeping ethical, professional and social debates in mind. This is a very important aspect, I would argue, since many tech projects and startups out there lack a moral compass, or a theoretical framing of what they are doing.
  7. The syllabi are continuously refreshed with up-to-date texts. This should be normal, right? But it is not always that common. It would be so easy to stick to older books instead of keeping an eye on the news and fresh points of view every day, especially in a field that changes and moves so fast.
  8. The professors were very cool. I know, it is such a shallow way of putting it in an academic environment, but I believe it is a good description for professors who manage to build a course around a field's needs, and are also able to have great debates and discussions with their students.
  9. Even though the fields were very new to many of the people attending the classes, the structure of the courses allowed anyone to enhance their own profession and interests. Basically, all three courses help you not necessarily to switch to something new, but to build on your background with a very fresh perspective. It does not matter if you are a linguist, a historian, or a journalist; either way, these subjects will definitely help you do better work or research.
  10. Last, but not least, these courses allow students to expand their theoretical knowledge, enhance their critical thinking, and acquire skills that are so valuable in today's job market.

This site (thehashtags.commons.gc.cuny.edu) is not an official site of the Fulbright Program or the U.S. Department of State. The views expressed on this site are entirely mine and do not represent the views of the Fulbright Program, the U.S. Department of State, or any of its partner organizations.


“The Automation Charade” and Why It Is Dangerous to Fear that Robots Will Take Our Jobs

I have always been skeptical of the very popular belief that AI mostly means robots, and that these robots will take over our jobs. I have never felt threatened by robots, yet the dark side of AI is mostly portrayed through them. All the news about their evolution and capabilities has left people with a scary conclusion: robots will be so intelligent and skillful that they will take our jobs. I couldn't agree less.

This is why I was so happy when I came across Astra Taylor's article in Logic Magazine. I am not planning to analyze or summarize it here, just to briefly introduce it.

Basically, her point is that the discourse I mentioned above embraces the idea of human replaceability at work. The more you feel threatened about losing your job to an iron creature with a human-like look, the harder you work: "But fauxtomation also has a more nefarious purpose. It reinforces the perception that work has no value if it is unpaid and acclimates us to the idea that one day we won't be needed". Taylor also goes through a short history of similar beliefs and discourses and criticizes them from a cultural perspective.

Nonetheless, I think this passage captures much of the automation debate and how we should understand it:

“Our general lack of curiosity about how the platforms and services we use every day really work means that we often believe the hype, giving automation more credit than it’s actually due. In the process, we fail to see—and to value—the labor of our fellow human beings. We mistake fauxtomation for the real thing, reinforcing the illusion that machines are smarter than they really are”.


Short Notes from the Speed Conference by Cornell Tech

Earlier this autumn I got the chance to attend Speed, an event organized by Cornell Tech. The two-day conference focused on the dimension of speed that, together with scale and complexity, comes up when we talk about algorithmic oversight. Speed is another important aspect of the way algorithms impact our lives, even though we might not think about it that often, now that we are used to getting so many things so fast. But as the event description puts it, "when an algorithm acts so much faster than any human can react, familiar forms of oversight become infeasible".

Even though I could only see a few of the presentations, I am going to briefly highlight some of the ideas that I found very interesting. You can check the entire schedule and the full presentations here.

  • James Grimmelmann from Cornell Tech gave a lightning presentation on moderation and memes. Moving from media bias and fact-checking to virality and social media trends, he discussed the concept of speed through the growing (and risky) popularity of memes. For example, he asked what makes memes take off so fast. Ambiguity might be one of the answers, a characteristic which, along with velocity and controversy, defines meme culture: a growing phenomenon that can be difficult to moderate and that does not always spread humor, but can also spread hate speech or offensive messages. After the presentation, this question got stuck in my mind: how can we train algorithms to make the distinction between humor and, let's say, extremely racist discourse? Is our technology able to notice the fine line? And when is it a "fine" line?

  • Next, I listened to Kate Klonick's preview of her latest article, "Facebook v. Sullivan". The article is now online and I highly recommend checking it out, especially if you are interested in content moderation, or in how equity can be reshaped by social media's moderation practices. The article examines "Facebook's moderation of user speech, discussing how and why exceptions for public figures and newsworthiness were carved out". I was surprised to find out, for example, how Facebook handles "bullying" in terms of content moderation. To put it very simply, and not in legal terms: when it comes to public figures, the rules are not the same as for regular people, meaning that Facebook gives more protection to the people who supposedly do not have the power or tools to protect themselves from being bullied or targeted with offensive speech. It might seem legitimate so far; but the problem is that Facebook relies on online news sources and news aggregators to make public figure determinations. And what if, for example, a person's name gets in the news after they were the victim of a burglary or violence, and there is a lot of content out there mentioning their name and identity? That person is definitely not a public figure, but might be considered one, and this is where the lines get blurry. Nonetheless, the article is far more complex than this, and you should definitely check it out!

  • Jason Farman's presentation, "A History of the Instant in Media and Message Exchange", was related to the subject of his latest book, Delayed Response: The Art of Waiting from the Ancient to the Instant World, a book I have not yet read, but which is on my list for sure. I loved how he went back to the idea of communication between individuals and how the medium is supposed to help, not alter, their messages. But does it alter them? That is a tough question, for sure. Nonetheless, whenever the time between getting a "seen" on a message you have just sent and receiving the reply feels too long, remember New York City's pneumatic tubes that carried letters throughout the city. "Each tube could carry between 400 and 600 letters and traveled at 30-35 miles per hour", and "It took between 15 and 20 minutes for mail to get from Herald Square to Manhattanville and East Harlem", says this website (sorry, during the conference I was so fascinated by the photos of the tubes that I forgot to write down this kind of stats). So was that also an instant messaging service?

  • I also attended Mike Ananny's presentation, "Public Pauses: Sociotechnical Dynamics of Temporal Whitespace in the Networked Press". It came as a great continuation of the talk he had given one day before at Databites, at Data & Society, where he was in conversation with Tarleton Gillespie and Kate Klonick (you can watch the video recording here). At the beginning of the presentation, he reminded the audience of the BBC's famous "There is no news" broadcast from 1930, in order to introduce the concept of media pauses. He then asked what media pauses are and why they matter, especially nowadays, in a networked press (a concept he develops at length in his latest book, Networked Press Freedom: Creating Infrastructures for a Public Right to Hear). Discussing whitespaces, and what a journalistic pause means in different situations, he stated: "if you look for them, absences are everywhere. Whitespace is never nothing. It's relational, and evidence of something". And this left me thinking about how users perceive media pauses today, in a continuous flux of information. Do people even notice media pauses?

PS: Sorry for the quality of the photos. I did not know I would end up using them here.



NYCML 2018: Highlights and Insights

In September I had the chance to participate in NYC Media Lab, an event meant to gather professionals, students and researchers from New York's media landscape and showcase some important innovations in the field. I found it extremely helpful and inspiring, and I thought I would share some ideas.

Next, I will highlight some of the things that caught my attention. I will start with the first keynote, a presentation by Thomas Reardon, Co-Founder and CEO of CTRL-Labs, on the future of neural interfaces. That future, they argue, lies in the concept of "intention capture": translating human intention into action through neural networks, which "decode the hidden language of biological neurons using neuron-inspired algorithms". It was very interesting to see video examples of how one might text on their phone without using their hands, for example, just with the help of this tool (if I might call it just a tool), which uses the neurons' signals.

As the presentation went on, I guess for many of us the future seemed a bit scary, not just innovative, because it looked like scenes of science fiction are now part of our present. But I liked the way Reardon argued for the necessity of this project and similar ones. Basically, from what I understood, he said that the more control you have, the more mental energy you have to be the "social animal" we need to be nowadays. Which I find somewhat true, as it can be difficult to keep up with the speed of information today. He also emphasized the shift from learning how to control a machine and using it, to actually controlling the machine. You can read more about it here if you are interested.

After the keynote, there was a showcase of projects made by students from different universities over the past year, in collaboration with different companies and supported by NYC Media Lab. The projects covered different topics, from history to press freedom or health, and used technologies like VR or even blockchain. You can find them here. Personally, I liked "Let's Make History" and "Secret Club" the most.

My favorite part of the conference was the debate on synthetic media. This new form of media stands at the intersection of computer-generated images, voice and video. The reason it is under debate is that it has become such an easy weapon for deepfakes and other forms of fake news, which are definitely a threat to democracy. The participants were Manoush Zomorodi, Stable Genius Productions; Matthew Hartman, betaworks; Karen Kornbluh, Council on Foreign Relations; Eli Pariser, Omidyar Fellow, New America Foundation; and Ken Perlin, NYU Future Reality Lab. It was interesting to see how they covered the good, the bad and the ugly of this new form of media. Among the good parts: it is a great way for artists to create new projects, and by knowing how synthetic media works, we also become more aware of how it can be used for "evil" purposes. The downside remains that it is quite easy to generate such media, so anyone can become a deepfake creator, which is obviously a problem. Nonetheless, I also participated in the workshop on synthetic media, so anyone interested in the subject can send me an e-mail and I'd be glad to share my notes!

The 100 demos exhibited at the event were definitely worth checking out. All the projects I managed to see were both very creative and meant to have a social impact: good examples of text mining, natural language processing, and VR/AR. You can check them here; I believe most of them also have a website!

