
huuuughhhhh yahoo selling scraped data from tumblr to AI sloo probably uughhhwaaauuwghhhhhh

this is what you're looking for to opt out!!!

The setting on both of my blogs was already set to "prevent third-party sharing" when I checked, so at the moment Tumblr seems to be telling the truth. The bar is on the ground, but at least they're not digging to get under it!

Let me tell you a story.

I am an archeologist. I specialize in a somewhat obscure but by no means boring or meaningless Neolithic culture in Germany.

It has a Wikipedia page. A well curated, surprisingly extensive Wiki page that encapsulates all the important information about the culture, including literature references for further research.

One day, we asked ChatGPT about this culture. We were curious which details it would get wrong.

ALL OF THEM, except for the fact that it's a culture in present day Germany.

It didn't even get the chronological time frame right and called it a Celtic culture.

When we told it it was wrong, it came at us with made-up literature sources. Literally made up. It took two well-known German archeologists who weren't even active at the same time, added a year - both were already dead - and sold that as a source.

And it LITERALLY would only have had to quote Wikipedia to get everything right.

THAT is how unbelievably shitty and wrong all those AIs are.

They are making shit up. They are not sourcing information, they're just slapping words together by their most likely relative occurrence.
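
For anyone curious what "slapping words together by relative occurrence" looks like in practice, here is a deliberately tiny, hypothetical sketch in Python (the training text is made up, and real systems like ChatGPT are enormously more complex): a toy model that only knows which word tended to follow which in what it read, and picks the next word by that frequency. Nowhere in the process does it consult a source or check whether anything is true.

```python
import random
from collections import defaultdict

# Made-up training text, purely for illustration.
training_text = (
    "the culture is a neolithic culture in present day germany "
    "the culture is known from pottery and settlement finds in germany"
)

# Record which word follows which; repeated pairs are stored repeatedly,
# so more frequent follow-ups are proportionally more likely to be picked.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start, length=10):
    """Chain words together purely by how often they followed each other."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # weighted by raw occurrence
    return " ".join(out)

print(generate("the"))  # plausible-sounding word salad; no sources, no fact-checking
```

Real large language models are far more sophisticated than this, but the core point stands: output is chosen for statistical plausibility, not verified against anything.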

Do not trust ChatGPT or any other so-called AI ever.

Whenever I think about students using AI, I think about an essay I did in high school. Now see, we were reading The Grapes of Wrath, and I just couldn't do it. I got 25 pages in and my brain refused to read any more. I hated it. And it's not like I hate the classics, I loved English class and I loved reading. I had even enjoyed Of Mice and Men, which I had read for fun. For some reason though, I absolutely could NOT read The Grapes of Wrath.

And it turned out I also couldn't watch the movie. I fell asleep in class both days we were watching it.

This, of course, meant I had to cheat on my essay.

And I got an A.

The essay was to compare the book and the movie and discuss the changes and how that affected the story.

Well it turned out Sparknotes had an entire section devoted to comparing and contrasting the book and the movie. Using that, and flipping to pages mentioned in Sparknotes to read sections of the book, I was able to bullshit an A paper.

But see, the thing is that this kind of 'cheating' still takes skills; you still learn things.

I had to know how to find the information I needed, I needed to be able to comprehend what Sparknotes was saying and the analysis they did, I needed to know how to USE the information I read there to write an essay, I needed to know how to make sure none of it was marked as plagiarized. I had to form an opinion on the Sparknotes analysis so I could express my own opinions in the essay.

Was it cheating? Yeah, I didn't read the book or watch the movie. I used Sparknotes. It was a lot less work than if I had read the book and watched the movie and done it all myself.

The thing is though, I still had to use my fucking brain. Being able to bullshit an essay like that is a skill in and of itself that is useful. I exercised important skills, and even if it wasn't the intended way I still learned.

ChatGPT and other AI do not give that experience to people; people have to do nothing and gain nothing from it.

Using AI is absolutely different from other ways students have cheated in the past, and I stand by my opinion that it's making students dumber, more helpless, and less capable.

However you feel about higher education, I think it's undeniable that students using ChatGPT is to their detriment. And by extension a detriment to anyone they work with or anyone who has to rely on them for something.

I can remember being in computer class right before history, and in the last ten minutes someone mentioned the class presentations we had next period, and I was like... fuck man, I fully forgot

I had a passing knowledge of WW2, as much as anyone, so I figured that I could bluff the context around Churchill and just get some of his details down and I'd be fine.

So I pulled his Wikipedia up and read it. Didn't have time to write a speech, this was gonna be adlib. Then I jumped on Google Images and pulled a picture that reflected one thing from each of his Wikipedia sections (like, early life (a picture of a train set), education (Churchill graduating), early war... you get the idea).

Bunged the pictures into a PowerPoint and read the Wikipedia again with the PowerPoint alongside, adding subheadings to jog my memory. Pulled a couple links from the bottom of the wiki for the bibliography, opened and skimmed to make sure they weren't wild, and saved the damn thing.

We were lining up outside class for history and the guys in the class are telling some classmates about how I'd just smashed out my whole presentation. I asked everyone to let me go first since the knowledge wasn't gonna last long, I was going off having just read Churchill's wiki lol

They all agreed (champions) and one of the girls said she'd read up on Churchill a bit for her presentation about the Queen, so she promised to nod or shake her head if I was completely wrong.

I presented. I know I spent a minute on each slide and spoke relevantly. I remember at one point saying Churchill excelled in school, saw my classmate was shaking her head, and pivoted to say he didn't do well with formal education but got into some of the extracurricular activities that'd benefit him come war time. She nodded. I continued lol. One of the lads complimented me on that one afterwards

I don't think I learnt much about Churchill with this study. But I absolutely learnt about public speaking. I was using research skills and applying my contextual knowledge. I also learnt to rely on classmates; even tho we weren't friends at all, she had my back because it was easy and kind and cost her nothing

I got a B+ and a comment about being one of the more engaging and charismatic presenters (that would've been the adrenaline, and my classmates were watching fascinated to see if I could pull it off lol).

The main perk of my presentation was the energy, which wouldn't've been there if I'd ai'd a script to read. And I wouldn't have this fun memory

I remember getting to a philosophy class in college (one I just took for fun) and realizing that there was a paper due that day that I had 100% forgotten about writing. I lied and told the professor that I had forgotten to print it, but I had my laptop with me for note taking, so if he'd give me 5 minutes after class I would run down to the computer lab and print it off and bring it up. He said that was fine, presumably because I couldn't write a coherent paper in 5 minutes.

But I COULD write a coherent paper in 45 minutes, which is about the time it took me to slap together a dirty outline and fill it in, the way I had been taught to do in high school in my writing class. It wasn't gonna win any awards but it meant a B+ instead of a zero, and it meant I had an opportunity to work under pressure and practice skills I had learned. Skills I STILL use to this day, skills I have taught to others. Skills I use to help others edit papers. Skills I would not have and certainly wouldn't have been able to hone if chatGPT was doing it poorly instead.

That's MY B+ bullshit essay. I earned it fair and square, along with the bragging rights to having written it under my professor's nose.

For any of you with an Academia.edu account, they have now turned on AI functions by default, which will use your own work to generate more works for you to "enjoy" - a friend of mine was emailed a podcast that was auto-generated from her research, without her knowledge or consent.

You can disable it under "Account Settings." I did so and then sent a complaint via tech support. They gave me some nonsense lines about how magnanimous it is that they're allowing users to disable it if they want (it shouldn't be on by default in the first place), and told me that they never generate anything for you unless you view the settings page (which is a lie from what I can tell).

This one is definitely less surprising, but for those of you still suffering through a LinkedIn account, I was turning off the "games" email setting (why the fuck would I want LinkedIn to email me about games when I have every other email setting disabled) when I noticed this under "Data Privacy", and of course it was turned on by default:

The thing that really boils my potatoes about AI in general is that I have been a creative professional for over a decade now and the devil has ALWAYS been in the details. Big and small, I've had single-person businesses rip me to shreds over how their colors turned out on newsprint, and have worked with huge companies with THICK brand guidelines with every detail of their brand identity laid out and enforced with an iron fist.

But I guess all of that stuff doesn't matter anymore? Who gives a fuck if this AI generated baby has six fingers, that mom-and-pop shop is still going to use it. That rug from Temu says Happy Thanksgivirg? Oh well haha it's just a silly funny thing now (nevermind that you never would have given a B-grade item from a craft show the same consideration). I don't actually care that the AI Coca-cola ad has a truck that changes size every scene, but I can't help but think about how, if it had been some poor underpaid artist, they would have been laughed out of the building.

I don't really know how to put it in a succinct way, but it just feels all the more obvious how much more grace and flexibility has always been possible but never offered.

Y’ALL!!

THIS IS NOT GOOD!!!

Jack Dorsey funded this with his nonprofit AndOtherStuff. The home page for the organization explicitly lists AI as one of its pillars, saying a goal is "making NOSTR the best social protocol for open source AI development and implementation"

What is open source AI?

It's a program that publicly shares its code for free online, so that anyone can use it for themselves. This means that anything this AI is trained on could eventually make its way into any business, social media site, etc., that uses this code. Or, if not, it'll still have the ability to harvest content just like this app does.

Even though the app doesn't allow AI openly in the videos themselves, the app is likely to use our original content to train its AI. You might have seen that some AI models have fallen into a feedback loop of declining quality, training themselves on other AI slop until anything they produce is unintelligible.

This is likely an attempt to prevent that in video format

Then, any other company that wants to use the code can use this better trained AI to make it even harder to recognize AI across the board.

You can read about the connection to AndOtherStuff, as well as the developers' reasons for the project here:

TLDR; do not give Jack Dorsey any credit for this, do not download the app, and tell others not to either. It's a nostalgia-bait attempt at fueling another AI model

@ GLAM professionals

Remember about 8 months ago when I made this video discussing the "Living Museum" AI app that is fraught with incorrect information and allows you to "chat" with human remains and culturally sensitive items?

Well, the creator, Jonathon Talmi, is now a speaker at the world-renowned Museum Next summit.

Gross.

I have sent an email to MuseumNext that is as follows:

Hello, 
I am emailing MuseumNext as a GLAM professional and former speaker to express my disappointment at the presence of one of your presenters, Jonathon Talmi, in the 2025 Digital Summit. As you know, he is the creator of the "Living Museum" app, an AI app that scraped data from the British Museum collection search (without their knowledge or permission) and created chatbots for individual artifacts. 
Talmi has stated that the program has two functions: to create an exhibit (which it does poorly, and the "exhibit" is no different than a Google search filled with incorrect and inaccurate information) and to allow the user to chat with artifacts (including contested items in the British Museum's collection, culturally sensitive items, and even human remains!). I have discussed my issues with this program at length in a critical YouTube video, which you can view here.
As I stated in my response video to this project, I do believe that there is space for artificial intelligence in the GLAM sector. However, the Living Museum is a poor example of how this can be used - it is fraught with mistakes and is an unethical nightmare of a program. 
Any GLAM professional worth their salt will condemn this, and I worry about the reputation of MuseumNext going forward if you allow this speaker at your summit. Your website boasts that: "MuseumNext has been sharing best practices and shaping how museums use digital technology for over fifteen years," but this is not best practice. Not even close. 
I am asking that you re-examine "The Living Museum" app, discuss the ethical ramifications of this program with a museum professional or an ethics committee, and reconsider allowing Jonathon Talmi to share this program at your summit. 
Please let me know if you have any further questions. 
Sincerely, 
Christeah

I'm so incredibly disappointed in MuseumNext for this. I had hoped to one day present with them again, but I am not sure if I want to do that anymore.

Generative AI has destroyed academia.

In the next few decades we’re going to have thousands of people who don’t really know anything, and can’t do any critical thinking.

"Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.” "

Aaaahhhhhhh🫠
“But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

i think this quote sums up the entire core issue of not just cheating in uni but also a huge trend i've noticed in online spaces where people don't bother to do literally anything anymore

i'm on a few film photography subreddits, and yesterday i saw someone post a photo of a camera with an extremely large front plate containing the camera's make and model, and the OP was asking "hey can anyone tell me what this camera is?" like... they couldn't even read those words and type them into google to see what comes up (which would have led them directly to a wiki page on the camera). they literally went straight to reddit and wanted everyone else to do it for them

this is a further extension of the same issue, which is that the younger generation is being taught by these things to have literally zero ability to think critically and problem solve

There’s also been a massive increase in ‘Petahexplainsthejoke’ posts reaching the front page of Reddit, and it’s almost always incredibly basic context-based humour.

And for what it’s worth, I feel it’s different from the ‘Explain Like I’m Five’ subreddit. It’s a total lack of interest in wanting to sit with something and think for more than a couple of seconds.

I don't disagree with any of this, but I do just want to add my two cents of hope as someone who teaches college freshmen and is on the front lines of this issue.

Absolutely AI is a problem, and kids are relying on it when they shouldn't. I see this every day. But there are also still students who are committed to doing their own, original work and I think that headlines like this sometimes have the effect of making it seem black and white, that all students are engaging with AI this way.

My class just finished presenting the research projects they worked on over the semester, and several of them came up with really interesting topics and thoughtful interpretations! They show up and are engaged, and I think we often fail to take these examples into account when we wring our hands about AI ruining The Youth™. These are the students I focus my energy on.

I am sure that some of the papers I will grade have been written using AI to some degree, but I've found that the students doing this are the ones who were going to cheat anyway. Not doing your own schoolwork is not new. Even before AI, the internet was full of sites where you could get a paper written or plagiarize from someone else's work. Sure, generative AI changes the game, but it is ultimately a symptom of a seriously flawed education system that has been around for quite a while.

Again, I don't disagree with all of the points above. There is an increasingly alarming inability of young people to think critically and less and less resistance to taking the easy way out. But this, like any other issue, does not universally apply across the board, and I wish headlines like this took that into account.

I pretty much agree with Reid. The biggest issue I see is students using AI as an "instead of" solution rather than an "in addition to" enhancement. Also a worryingly bad understanding of what plagiarism is, which definitely exacerbates the issue but is also a problem in and of itself... So to sum up, yes. It is bad. Some students are completely dependent on it, but they are not the majority. The majority know it can help but are not being taught how to use it to help. And as a high school teacher: we are trying. But we are overworked, underpaid, and already asked to teach 3 years of material in one, so... Don't count on it. And help us out by having these conversations with children/students in your life.

When I worked as a writing tutor before AI, a large part of my time was spent explaining to students that you can't copy-paste bits of the internet into your papers without quoting and citing it.

And the way I knew they were doing that was because they almost never changed the font and text color to match the rest of the paper.

So they were already technically cheating, and they weren't even making the effort to cheat WELL.

And you know who usually did that? Students in their first semester who didn't really know how to write university papers yet. Students who weren't particularly comfortable writing in English and were overwhelmed writing about complex topics in a new language. Students who didn't feel like they had the time and energy to produce thoughtful, high quality papers.

I had 2 girls come into my session in one day who had written their papers entirely in Cantonese and run them through Google Translate. If you know anything about English and Cantonese, they have wildly different grammar and sentence structures, so the resulting papers were almost incomprehensible in English.

Their ideas, their arguments, were sound and well thought out. They just weren't familiar enough with English to say what they wanted to say. They were in a 2nd year English lit class, which their advisors had recommended as an easy breadth requirement class.

And it would have been a fun, easy class if your first language was English. For these girls, it was an uphill battle.

Maybe 2% of my 'cheating' students would have still done those things if they'd had the support and time they needed for their schoolwork. Most of them really wanted to hand in good, original work, but they didn't feel like they had what they needed to do that.

If we made university a place focused on high quality learning instead of a factory that produces graduates, we'd see a lot fewer students trying to find shortcuts to do their homework.

ur future nurse is using chatgpt to glide thru school u better take care of urself

Yep. This is terrifying. I’ve caught nursing majors, engineering majors, architecture majors relying on ChatGPT to do their homework. These are people who need to know their field well to ensure people don’t die and they’re letting a glorified algorithm cheat them through school. It’s so dangerous

hey. hi. I work in academia. and there are a lot of student-age folks on this site.

don't do this. don't use genAI. even if your professors give you permission. even if they ask for it or suggest it. if they do anything short of directly requiring it (and I weep, because I've already seen assignments that require it) don't touch that crap. if they do require it, stick it to them. be as maliciously compliant as possible. be a nightmare.

I know it might sound easier right now -- just plug in your assignment and get the answers. you don't care about this class anyway, it's not for your major, you don't see the value of the assignment.

but for your own sake, for the sake of your education and mind, and for the sake of the future world we want to have: learn the stuff. you are not as stupid as the corporate bizarro kings who want to rule the world think you are, so don't give them reasons to believe it.

and odds are good genAI is gonna give you corrupted info anyway -- more and more so as the machines cannibalize themselves.

just don't do it. not even "I just do it for XYZ--" no. stop. there is no valid use of generative AI, and even using it for memes or lolz feeds the system and directly feeds the pockets of the people who want to replace you anyway.

Rage reblogging this. Yesterday i got into an argument with one of my college friends who is using chatGPT to do all her work. We're psychology students. The whole group chat laughed my arguments off as if they didn't matter because "she's an artist, of course she's anti-AI" and i had to deal with it. This is a warning. If your therapist graduated in 2023/2024, ask about their opinions on chatGPT. They might lie to you if you ask "did you use it to graduate" directly, but try to make jokes about it and play it cool. If they're into it, DROP OFF. FIND A NEW ONE. Do not trust your brain to someone who didn't bother to use theirs.
