Teach, Task Box, Inspire: The Podcast is your go-to educational podcast dedicated to making your job as a special education teacher easier and more enjoyable. Your host, Lisa Hollady, is a veteran special education teacher with a passion for helping teachers like you make a real difference in the lives of your students. In your demanding roles, you’re constantly juggling various responsibilities, from differentiated instruction and Individualized Education Programs (IEPs) to data collection, ...
 
While on the road training schools in Trust-Based Observations, we periodically see absolute teaching brilliance during our 20-minute observations. It dawned on us that we have an obligation to share this brilliance so that all teachers can learn and grow from one another. Each episode is an interview with one of these teachers, exploring their strengths as they share the tips and tricks that lead to improved teaching and learning.
 
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers Support this podcast: https://podcasters.spotify.com/pod/s ...
 
Secondary Science Simplified™

Rebecca Joyner, High School Science Teacher

Weekly
 
Secondary Science Simplified is a podcast specifically for high school science teachers that will help you engage your students AND simplify your life as a secondary science educator. Each week Rebecca, from It's Not Rocket Science, and her guests will share practical and easy-to-implement strategies for decreasing your workload so that you can stop working overtime and start focusing your energy on doing what you love - actually teaching! Teaching doesn't have to be rocket science, and you' ...
 
The TeacherGoals Podcast is designed to equip educators with best practices & actionable strategies to achieve success and foster a more connected & empowered school community. Here we learn by engaging in honest discussions with innovative teachers, administrators and educational leaders. Our goal is to provide you with weekly inspiration that highlights what's funny, frustrating, & fantastic about teaching so that you can be your best for students!
 
I love celebrating podcast milestones with my listeners so much that I couldn't celebrate with just one episode! Last week, I hit 150 episodes and decided to answer questions sent in by listeners. Although I covered a variety of topics, I received so many questions that I needed another episode to answer them all. So, in part 2…
 
This paper introduces UNSTAR, a novel unlearning method for large language models using anti-samples to efficiently and selectively reverse learned associations, enhancing privacy and model modification capabilities. https://arxiv.org/abs//2410.17050 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Po…
 
This paper explores how Knowledge Editing algorithms can unintentionally distort model representations, leading to decreased factual recall and reasoning abilities, a phenomenon termed "representation shattering." https://arxiv.org/abs//2410.17194 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
 
The paper proposes GenRM, a hybrid approach combining RLHF and RLAIF, improving synthetic preference labels' quality and outperforming existing models in both in-distribution and out-of-distribution tasks. https://arxiv.org/abs//2410.12832 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
 
Even though you may love the science discipline you teach, that doesn't mean you're excited to teach every topic, let alone that your students will enjoy everything you teach. Unfortunately, certain topics are considered boring but still need to be taught! Knowing which topics are deemed boring for you, how can you turn those around…
 
In the latest Teach, Task Box, Inspire episode, Lisa tackles the tough topic of student work refusal, sharing hands-on strategies to turn resistance into engagement. She digs into why students might refuse to work—like feeling overwhelmed, fearing failure, or dealing with personal struggles—and stresses the need to understand the "why" behind the b…
 
This paper presents an AI agent for error resolution in computational notebooks, enhancing bug-fixing capabilities while evaluating user experience and collaboration within the JetBrains Datalore service. https://arxiv.org/abs//2410.14393 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: http…
 
This study explores "dark matter" in sparse autoencoders, revealing that much unexplained variance can be predicted and proposing methods to reduce nonlinear error in model activations. https://arxiv.org/abs//2410.14670 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.…
 
Toni Rose is joined by Sara Bennis to talk about how she uses and adapts MCP as the principal of a rural elementary school. Show Notes The Energy Bus, by Jon Gordon Connect with Sara by email at sbennis@maquoketaschools.org, and check out her district and its socials at maquoketaschools.org Learning Experiences for the Upcom…
 
The paper analyzes scaling laws in machine learning, providing best practices for estimating model performance using a large dataset of pretrained models and emphasizing the importance of intermediate training checkpoints. https://arxiv.org/abs//2410.11840 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Ap…
 
This paper presents a novel method for image inversion and editing using rectified flow models, achieving superior performance in zero-shot tasks compared to existing diffusion model approaches. https://arxiv.org/abs//2410.10792 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcas…
 
The paper explores whether large language models (LLMs) can introspect, finding that finetuned models can predict their own behavior, suggesting a form of internal knowledge access. https://arxiv.org/abs//2410.13787 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/…
 
The paper proposes a method to enhance LLMs' thinking abilities for better instruction following, improving performance across various tasks without additional human data through iterative search and optimization. https://arxiv.org/abs//2410.10630 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
 
The paper investigates extreme-token phenomena in transformer-based LLMs, revealing mechanisms behind attention sinks and proposing strategies to mitigate their impact during pretraining. https://arxiv.org/abs//2410.13835 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.appl…
 
https://arxiv.org/abs//2410.13720 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
https://arxiv.org/abs//2410.12557 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
https://arxiv.org/abs//2410.04343 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
This study explores redundancy in Transformer architectures, revealing that many attention layers can be pruned with minimal performance loss, enhancing efficiency for large language models. https://arxiv.org/abs//2406.15786 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.a…
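For a concrete picture of what "pruning an attention layer" means, here is a minimal self-contained toy sketch (not the paper's code; the model and all names are invented for illustration): a block's attention sublayer is simply skipped while its MLP path is kept.

import torch
import torch.nn as nn

class Block(nn.Module):
    # A single pre-norm transformer block whose attention sublayer can be skipped.
    def __init__(self, d: int, heads: int):
        super().__init__()
        self.use_attn = True
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        if self.use_attn:  # "pruning" this layer just skips its residual branch
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

blocks = nn.Sequential(*[Block(64, 4) for _ in range(8)])
x = torch.randn(1, 16, 64)
full = blocks(x)

for b in blocks[4:]:  # drop attention in the last four blocks
    b.use_attn = False
pruned = blocks(x)

# The paper measures task performance after pruning real models; this toy
# just prints the relative change in the output of a random network.
print(((full - pruned).norm() / full.norm()).item())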
 
The paper investigates how large language models represent numbers, revealing they use digit-wise circular representations, which explains their frequent errors in numerical reasoning tasks. https://arxiv.org/abs//2410.11781 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.a…
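As a rough illustration of the finding (a hedged sketch, not the paper's code; the helper function is hypothetical), a digit-wise circular representation maps each base-10 digit to a point on the unit circle, so 9 sits next to 0 and small rotations correspond to off-by-one digit errors:

import math

def circular_digit_embedding(n: int) -> list[tuple[float, float]]:
    # Map each base-10 digit of a nonnegative n to a point on the unit circle:
    # digit d goes to angle 2*pi*d/10, so nearby digits get nearby embeddings.
    return [
        (math.cos(2 * math.pi * int(d) / 10),
         math.sin(2 * math.pi * int(d) / 10))
        for d in str(n)
    ]

# Example: 42 -> one circle point per digit.
print(circular_digit_embedding(42))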
 
In this episode, Lisa dives into practical strategies for teaching flexibility and reducing rigid thinking in autistic students. Since many kids with autism find comfort in routines, Lisa highlights the need to balance maintaining structure with creating room for growth. She shares insights on how understanding the roots of rigidity, modeling flexible be…
 
There are many elements of teaching that are unpredictable and out of your control, which can make them difficult to handle or prepare for. One of those things is student absences. Let's be honest, it's a miracle when all of your students are in class on the same day! So, you need to prepare and determine how you're going to deal with the absences o…
 
This paper explores using large language models to generate code transformations through a chain-of-thought approach, demonstrating improved precision and adaptability compared to direct code rewriting methods. https://arxiv.org/abs//2410.08806 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts…
 
The paper evaluates unlearning techniques in Large Language Models, revealing that current methods inadequately remove sensitive information, allowing attackers to recover significant pre-unlearning accuracy. https://arxiv.org/abs//2410.08827 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: …
 
Toni Rose is joined by Sabrina Monserate-Lindsay for a deep dive into her approach to using the MCP model in her high school math class. Show Notes World Jigsaw Puzzle Championship 2024 Connect with Sabrina on Instagram @puzzle_bites Learning Experiences for the Upcoming Week We’re attending MassCUE in Foxborough, MA from Oct. 15-17. We have four o…
 
MLE-bench is a benchmark for evaluating AI agents in machine learning engineering, featuring 75 Kaggle competitions and establishing human baselines, with open-source code for future research. https://arxiv.org/abs//2410.07095 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts…
 
https://arxiv.org/abs//2410.07073 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
DIFF Transformer enhances attention to relevant context while reducing noise, improving performance in language modeling, long-context tasks, and in-context learning, making it a promising architecture for large language models. https://arxiv.org/abs//2410.05258 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pap…
 
This study introduces GSM-Symbolic, a benchmark revealing LLMs' inconsistent mathematical reasoning, highlighting performance drops with altered questions and increased complexity, questioning their genuine logical reasoning abilities. https://arxiv.org/abs//2410.05229 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@ar…
 
Switch Sparse Autoencoders efficiently scale feature extraction in neural networks by routing activations through smaller expert models, improving reconstruction and sparsity while reducing computational costs. https://arxiv.org/abs//2410.08201 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts…
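To make the routing idea concrete, here is a toy sketch under our own naming (not the paper's implementation): a small router assigns each activation vector to a single expert sparse autoencoder, so only that expert's encoder and decoder run for that input.

import torch
import torch.nn as nn

class SwitchSAE(nn.Module):
    # Toy switch sparse autoencoder: route each activation to one expert SAE.
    def __init__(self, d_model=64, d_hidden=256, n_experts=4, k=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.encoders = nn.ModuleList(nn.Linear(d_model, d_hidden) for _ in range(n_experts))
        self.decoders = nn.ModuleList(nn.Linear(d_hidden, d_model) for _ in range(n_experts))
        self.k = k

    def forward(self, x):  # x: (batch, d_model)
        expert = self.router(x).argmax(dim=-1)  # pick one expert per input
        out = torch.zeros_like(x)
        for e in range(len(self.encoders)):
            mask = expert == e
            if mask.any():
                h = torch.relu(self.encoders[e](x[mask]))
                # keep only the top-k features per input (the sparsity constraint)
                topk = torch.topk(h, self.k, dim=-1)
                h_sparse = torch.zeros_like(h).scatter_(-1, topk.indices, topk.values)
                out[mask] = self.decoders[e](h_sparse)
        return out

sae = SwitchSAE()
x = torch.randn(32, 64)
print(nn.functional.mse_loss(sae(x), x).item())  # reconstruction error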
 