• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Insane compute wasn’t everything. Hinton helped develop the techniques that allowed more data to be processed through more layers of a network without totally losing coherence. Before that, it was more of a toy, because it capped out on how much data could be used, how many layers of a network could be trained, and, I believe, even on whether GPUs could be used efficiently for ANNs, but I could be wrong on that last one.

    Either way, after Hinton’s research in ~2010-2012, problems that seemed extremely difficult to solve (e.g., classifying images and identifying objects in images) became borderline trivial, and in under a decade ANNs went from an almost fringe technology that many researchers saw as a toy useful for a few problems to basically dominating all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, every year we are pumping out papers and tech that seemed many decades away… Every year… Across a very broad range of problems.

    The 580 and CUDA made a big impact, but Hinton’s work was absolutely pivotal in being able to utilize that and to even make ANNs seem feasible at all, and it was an overnight thing. Research very rarely explodes this fast.

    Edit: I guess also worth clarifying, Hinton was also one of the few researching these techniques in the 80s and has continued being a force in the field, so these big leaps are the culmination of a lot of old, but also very recent work.


  • Lots of good comments here. I think there’s many reasons, but AI in general is being quite hated on. It’s sad to me - pre-GPT I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here’s a few perspectives:

    • Training data is questionable/debatable ethics,
    • Amateur programmers don’t build up the same “code muscle memory”,
    • It’s being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
    • The time saved writing code isn’t being used to review and test the code more carefully than it was before,
    • The AI is being used for problem solving, where it’s not ideal, as opposed to code-from-spec where it’s much better,
    • Non-Local AI is scraping your (often confidential) data,
    • Environmental impact of the use of massive remote LLMs,
    • Can be used (according to execs, anyways) to replace entry level developers,
    • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
    • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what’s important and don’t see the skills they strengthen along the way to the answer.

    I like coding with local LLMs and asking occasional questions to larger ones, but on larger code bases the code from these small, local models is often pretty nonsensical; it improves with the right approach, though. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
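    As a rough illustration of the “write your test cases in advance” part, here’s a minimal sketch; the `slugify` function, the `slugify_util` module, and the expected behaviours are hypothetical examples, not anything from a real project:

```python
# test_slugify.py - written BEFORE asking the model for an implementation.
# The function name, module, and expected behaviours are hypothetical examples.
from slugify_util import slugify  # module the LLM will be asked to fill in


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Ready, set, go!") == "ready-set-go"


def test_collapses_repeated_separators():
    assert slugify("a  b---c") == "a-b-c"
```

    Paste whatever the model generates into `slugify_util.py` and run `pytest`; if the generated code can’t pass tests you wrote yourself, you find out immediately rather than in review.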

    I think there are a lot of reasons to hate on it, but I think it’s because the ways to use it effectively are still being figured out.

    Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, in-line documentation, and automated code linting mean you don’t really need to know anything or follow any good practices, your editor will do it all for you, so you should just use vim or notepad. It’ll take time to adopt and adapt.


  • As someone who researched AI pre-GPT to enhance human creativity and aid creative workflows, it’s sad for me to see the direction it’s been marketed in, but I’m not surprised. I’m personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break past the current applications, which seem aimed more at stock prices and wow-factoring the public than at using these models for what they’re best at.

    The whole exciting part of these models was that they could convert unstructured inputs into natural language and structured outputs: translation tasks (broad definition of translation), extracting key data points from unstructured data, language tasks in general. They’re outstanding for the NLP tasks we struggled with previously, and those tasks are highly transformative of any input; they rely purely on structural patterns. I think few people would argue that NLP tasks infringe on the copyright owner.

    But I can at least see how moving the direction toward (particularly with MoE approaches) using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code edges into the territory of affecting sales and using someone’s IP to compete against them. From a technical perspective, I understand how LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people’s data to compete against them, which is dubious at best.


  • Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really close to equivalent to me citing papers researched for a paper. That would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

    Now, if specific data were injected into the prompt, or maybe if it was fine-tuned on a small subset of highly specific data, I would agree those sources should be cited, since they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, before the network was rather suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.


  • My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.

    I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and am toying with using it a bit for work-from-home where the shift in environment is surprisingly helpful.

    It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which are great, back when Bigscreen had them anyways), headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad, there haven’t been many killer-app games/products that distinguish it from other modalities, and it’s going to need a critical amount of adoption to get used in remote meetings.

    I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

    They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending the monitors to give more privacy and flexibility to laptops, strong AR to reduce the need to take the headset off - but they’re first selling the idea, and then maybe there will be a break. I’ll admit the industry is moving much slower than I’d anticipated back in 2012 when I was starting VR research.



  • Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they ship smaller, fine-tuned, specific-purpose AI instead of “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been rocking local LLMs for a while and they’ve been a great little complement to language processing tasks in my coding.

    Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling, I’m sure there are many great use cases. And, purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features might be able to be included here too, and those could be super lightweight and still helpful.
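    Purely as a sketch of how small, local, task-specific models can already cover some of these (using Hugging Face transformers here; the specific models are just examples I’d reach for, not anything Mozilla has announced):

```python
# Sketch: small task-specific models running locally via Hugging Face
# transformers. Model choices are examples only; both run fine on CPU.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

page_text = (
    "Local language models have improved rapidly. Small distilled models can "
    "summarize pages, detect sentiment, and translate text on consumer "
    "hardware without sending any data to a remote service."
)

# Page summarization and a sentiment/clickbait-style signal.
print(summarizer(page_text, max_length=30, min_length=10)[0]["summary_text"])
print(sentiment("You won't BELIEVE what happened next!")[0])
```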

    If it goes fully remote AI, it loses a lot of privacy cred and positions itself really similarly to everyone else. From a financial perspective, bandwagoning on AI in the browser while promising “we won’t send your data anywhere” seems like a trendy, but potentially helpful and effective, way to bring in a demographic interested in it without sacrificing principles.

    But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn’t lead things astray too hard.


  • I get both sides of the argument here. I think we need to have this big reaction because companies have held so much power over employees for so long - I’ll avoid ranting about worker-owned cooperatives here - but the past few years I’ve surprised myself by moving into a bit of a “slippery slope” camp with these things. Not to say it shouldn’t happen, but that we need to be prepared for the follow-up.

    Hopefully related example, in education: there was some really big pushback recently where I am over bad treatment of high school students, all legit. The school board ignored it for a long time, it got bad, and they finally took it seriously. Then they overcorrected, stopped believing teachers at all, and started jumping straight to firing at almost any complaint. Then students started weaponizing complaints, and now teachers are getting fired for trying to enforce deadlines and for giving low marks, because students complain that deadlines, grades, and grading requirements are detrimental to mental health and well-being. And now there are a bunch of students from this board in my university classes failing hard and filing complaints about courses being too difficult and other things, despite those courses having glowing reviews just a few years prior.

    I guess what I’m getting at: I think it’s fair for someone to choose not to hire people like this, because it’s possible that the people willing to stand up and make an important fuss over these things might not know where the line lies between a worthwhile complaint and a non-worthwhile one, and might make a company look bad externally even though it’s doing good internally, just not by the expectations of someone new to the workforce.

    I also think it’s fair to go the opposite direction, because ultimately we need major change in the way companies/everything are structured, the kind of structure that leads to these nasty layoffs and poor conditions, and if someone does raise issues where there aren’t any, hopefully we are prepared enough, and in the right enough, to take it seriously but weather it and act in everyone’s best interests.





  • Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying it’s high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases or to sensibly complete code that isn’t at the same quality bar or style as the training code.

    On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code because it’s continuing what was written. Using standard approaches, documenting, and just generally following good practice with your code before sending it to the LLM will majorly improve results.
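    To make that concrete, here’s a minimal sketch of what “provide higher quality code as input” can look like against a local model. It assumes an Ollama-style HTTP endpoint on the default port, and the model name and example functions are placeholders, not recommendations:

```python
# Sketch: prompt a local LLM with documented, consistently styled code so it
# continues in that style. Assumes a local Ollama server on the default port;
# the model name and the example functions are placeholders.
import requests

CONTEXT = '''
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
'''

TASK = '''
def mode(values: list[float]) -> float:
    """Return the most common value, breaking ties toward the smallest."""
'''

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5-coder", "prompt": CONTEXT + TASK, "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```

    The same request without the documented, typed example tends to come back in whatever style the model feels like; with it, the completion usually follows the docstring and typing conventions you’ve shown.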




  • I appreciate the comment, and it’s a point I’ll be making this year in my courses. More than ever, students have been struggling to motivate themselves to do the work. The world’s on fire and it’s hard to intrinsically motivate to do hard things for the sake of learning, I get it. Get a degree to get a job to survive, learning is secondary. But this survival mindset means that the easiest way is the best way, and it’s going to crumble long-term.

    It’s like jumping into an MMORPG and using a bot to play the whole game. Sure, you have a character at the level cap, but you have no idea how to play, no idea how to build a character, and you don’t get any of the references anyone else is making.


  • This is a very output-driven perspective. Another comment put it well, but essentially when we set up our curriculum we aren’t just trying to get you to produce the one or two assignments that the AI could generate - we want you to go through the motions and internalize secondary skills. We’ve set up a four year curriculum for you, and the kinds of skills you need to practice evolve over that curriculum.

    This is exactly the perspective I’m trying to get at with my comment - if you go to school to get a certification to get a job and don’t care at all about the learning, of course it’s nonsense to “waste your time” on an assignment that ChatGPT can generate for you. But if you’re there to learn and develop a mastery, the additional skills you would have picked up by doing the hard thing - maybe with a chat AI supporting you in a productive way - are really where the learning is.

    If 5 year olds can generate a university level essay on the implications of thermodynamics on quantum processing using AI, that’s fun, but does the 5 year old even know if that’s a coherent thesis? Does it imply anything about their understanding of these fields? Are they able to connect this information to other places?

    Learning is an intrinsic task that’s been turned into a commodity. Get a degree to show you can generate that thing your future boss wants you to generate. Knowing and understanding are secondary. This is the fear with generative AI - further losing sight of the fact that we learn through friction and that the final output isn’t everything. Note that this is coming from a professor who wants to mostly do away with grades, but recognizes larger systemic changes need to happen.


  • 100%, and this is really my main point. Because it should be hard and tedious, a student who doesn’t really want to learn - or doesn’t have trust in their education - will use the AI to bypass those tedious bits rather than work through the tedious, auxiliary skills they’re expected to pick up; the AI should be a personal tutor, not a replacement for those skills.

    So often students are concerned about getting a final grade, a final result, and think that was the point, thus, “If ChatGPT can just give me the answer, what was the point?” But no, there were a bunch of skills along the way that are part of the scaffolding, and you’ve bypassed them through improper use of available tools. For example, in some of our programming classes we intentionally make you use worse tools early, to provide a fundamental understanding of the evolution of language ergonomics or of the underlying processes that power the more advanced, but easier to use, concepts. It helps you generalize later, so that you don’t just learn how to solve this problem in this programming language, but learn how to solve the problem in a messy way that translates to many languages before you learn the powerful tools of this one. As a student, you may get upset you’re using something tedious or out of date, but as a mentor I know it’s a beneficial step in your learning career.

    Maybe it would help to teach students about learning early, and how learning works.


  • Education has a fundamental incentive problem. I want to embrace AI in my classroom. I’ve been studying ways of using AI for personalized education since I was in grade school. I wanted personalized education, the ability to learn off of any tangent I wanted, to have tools to help me discover what I don’t know so I could go learn it.

    The problem is, I’m the minority. Many of my students don’t want to be there. They want a job in the field, but don’t want to do the work. Your required course isn’t important to them, because they aren’t instructional designers who recognize that this mandatory tangent is scaffolding the next four years of their degree. They have a scholarship, and can’t afford to fail your assignment to get feedback. They have too many courses, and have to budget which courses to ignore. The university holds a duty to validate that those passing the courses met a level of standards and can reproduce their knowledge outside of a classroom environment. They have a strict timeline - every year they don’t certify their knowledge to satisfaction is a year of tuition and random other fees to pay.

    If students were going to university to learn, or going to highschool to learn, instead of being forced there by societal pressures - if they were allowed to learn at their own pace without fear of financial ruin - if they were allowed to explore the topics they love instead of the topics that are financially sound - then there would be no issue with any of these tools. But the truth is much bleaker.

    Great students are using these tools in astounding ways to learn, to grow, to explore. Other students - not bad ones necessarily, but ones under pressures that make education motivated purely by extrinsic factors rather than intrinsic ones - have a perfect crutch available to accidentally bypass the necessary steps of learning. Because learning can be hard, and tedious, and expensive, and if you don’t love it, you’ll take the path of least resistance.

    In game design, we talk about not giving the player the tools to optimize their fun away. I love the new wave of AI, I’ve been waiting for this level of natural language processing and generation capability for a very long time, but these are the tools for students to optimize the learning away. We need to reframe learning and education. We need to bring learning front and center instead of certification. Employers need to recognize this, universities need to recognize this, highschools and students and parents need to recognize this.


  • When I teach story points (not in an official Agile Scrum capacity, just as part of a larger course) I emphasize that the points are for conversation and consensus more than actual estimates.

    Saying this story is bigger than that one, and why, and seeing people in something like planning poker give drastically differing estimates is a great way to signal that people don’t really get the story or some major area wasn’t considered. It’s a great discussion tool. Then it also gives a really rough ballpark to help the PO reprioritize the next two sprints before planning, but I don’t think they should ever be taken too seriously (or else you probably wasted a ton of time trying to be accurate on something you’re not going to be accurate on).
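    If it helps, the “differing estimates are a signal” part is simple enough to put in a toy sketch (the stories and votes below are made up):

```python
# Toy sketch of the planning-poker signal: a wide spread in estimates flags a
# story for discussion instead of being averaged away. Votes are made up.
def needs_discussion(votes, max_ratio=2.0):
    """Flag a story when the largest estimate dwarfs the smallest."""
    return max(votes) / min(votes) > max_ratio

stories = {
    "Login form": [2, 3, 3, 2],
    "Report export": [3, 8, 13, 5],  # wide spread: someone sees hidden work
}
for name, votes in stories.items():
    print(name, "-> discuss" if needs_discussion(votes) else "-> consensus")
```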

    Students usually start by using task-hours as their metric, and naturally get pretty granular with tasks. This is for smaller projects - in larger ones, amortizing to just number of tasks is effectively the same as long as it’s not chewing away way more time in planning.


  • Hmm… Nothing off the top of my head right now. I checked out the Wikipedia page for Deep Learning and it’s not bad, but it has quite a bit of technical info and jumps around the timeline, though its history section does go all the way back to the 1920s, which gives some jumping-off points. Most of what I know came from grad school and from researching creative AI around 2015-2019, and from being a bit obsessed with it growing up, before and during my undergrad.

    If I were to pitch some key notes: the page details lots of the cool networks that dominated in the ’60s-2000s, but it’s worth noting that there were lots of competing models besides neural nets at the time. Then, in 2011, two things happened at right about the same time: the ReLU (a simple way to help preserve signal through many layers, allowing more complexity), which, while established in the ’60s, only swept deep learning in 2011; and, majorly, Nvidia’s cheap graphics cards with parallel processing and CUDA, which were found to massively boost the efficiency of running networks.
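    For the ReLU bit, a tiny sketch of why it helps signals survive many layers, compared with a saturating activation like the sigmoid (the numbers are just illustrative):

```python
# ReLU vs. sigmoid at a glance: ReLU's gradient is exactly 1 for any positive
# input, while sigmoid's gradient is at most 0.25 and shrinks quickly, so
# stacking many sigmoid layers squashes the learning signal.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-3.0, -0.5, 0.5, 3.0])
print("relu(x):          ", relu(x))
print("relu gradient:    ", (x > 0).astype(float))            # 0 or 1
print("sigmoid gradient: ", sigmoid(x) * (1.0 - sigmoid(x)))  # <= 0.25
```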

    I found a few links with some cool perspectives: Nvidia post with some technical details

    Solid and simplified timeline with lots of great details

    It does exclude a few of the big popular culture events, like Watson on Jeopardy in 2011. To me it’s fascinating because Watson’s architecture was an absolute mess by today’s standards: over 100 different algorithms working in conjunction, mixing tons of techniques together to get a pretty specifically tuned question-and-answer machine. It took 2880 CPU cores to run, and it could win about 70% of the time at Jeopardy. Compare that to today’s GPT-style models, which, while ChatGPT requires massively more processing power to run, have an otherwise elegant structure, and I can run awfully competent ones on a $400 graphics card. I was actually in a gap year waiting to start my undergrad in AI and robotics during the Watson craze, so seeing it and then seeing the 2012 big bang was wild.