Why Language Models Hallucinate
Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true.
Digital dividends: tokenized real estate
Over the last eight years, since the first tokenized real estate deals were completed, tokenization has helped open potential new avenues for real estate investment through fractional ownership.
AI Training Data Explained
In the exciting world of artificial intelligence (AI), one of the most important concepts to understand is "training data." But what does this term really mean? Why is it so crucial for AI? In this article, let’s break it down into simple terms that everyone can understand, whether you’re well versed in AI or completely new to it.
The 30% Rule in AI
AI can speed up work but cannot match human empathy or nuanced thinking. The 30% Rule keeps humans as key decision-makers while AI handles routine tasks. This approach safeguards jobs by blending automation with human strengths. It creates a balanced system where technology and people work together.
How AI Works - Practical Insights for Residential Real Estate
Practical insights about generative AI, brokerage enablement, and the leadership philosophy behind RETEQ’s offerings.
Parental Controls in ChatGPT
New tools and resources to support families, and notifications to keep teens safe.
Why You Can’t Always Reproduce the Same AI Result
In the world of artificial intelligence (AI), we often hear impressive stories about how machines can learn from data and make decisions. Whether it's recommending your next favorite movie or helping doctors diagnose illnesses, AI holds incredible potential. However, one intriguing aspect of AI is its unpredictability. You might ask, "Why can't we always get the same result from an AI?" This question leads us to explore the fascinating world of AI algorithms, data, and randomness.
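The randomness described above can be illustrated with a toy sketch (the function and reply strings are purely illustrative, not from any real model): sampling without a seed can vary run to run, while a fixed seed makes the draw reproducible.

```python
import random

def sample_reply(seed=None):
    # Toy stand-in for a model sampling at nonzero temperature:
    # pick one of several plausible completions at random.
    rng = random.Random(seed)
    return rng.choice(["Answer A", "Answer B", "Answer C"])

# With no seed, repeated calls may differ; with a fixed seed, the
# same call always returns the same reply.
assert sample_reply(seed=7) == sample_reply(seed=7)
print(sample_reply())
```

Real AI systems add further sources of variation beyond sampling, such as non-deterministic floating-point math on GPUs, so even a fixed seed does not always guarantee identical outputs.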
ChatGPT vs. Claude: How People Really Use AI
Among the conclusions: ChatGPT, used by 700 million people weekly as of the end of July, is engaged more for personal tasks such as writing and research, while Claude is leaned on more for work-related jobs.
A New Challenger to LinkedIn!?
OpenAI has announced it is developing an AI-centered jobs platform as part of broader efforts to expand AI literacy, and as the company grows its consumer and business-facing AI applications.
The ChatGPT maker’s “OpenAI Jobs Platform” will utilize AI to help connect qualified job candidates to companies, which could put it in competition with Microsoft’s LinkedIn.
Large Action Models (LAMs)
While LLMs are great for understanding and producing unstructured content, LAMs are designed to bridge the gap by turning language into structured, executable actions.
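A minimal sketch of that bridge, with all names and the JSON format invented for illustration: the model's language is assumed to have been converted into a structured JSON "action," which is then dispatched to a real function.

```python
import json

# Hypothetical registry of executable actions a LAM could dispatch to.
def schedule_showing(address: str, time: str) -> str:
    return f"Showing booked at {address} for {time}"

ACTIONS = {"schedule_showing": schedule_showing}

def execute(model_output: str) -> str:
    """Parse a structured JSON action and run the matching function."""
    call = json.loads(model_output)
    fn = ACTIONS[call["action"]]
    return fn(**call["arguments"])

# Imagined model output: language already turned into a structured action.
output = '{"action": "schedule_showing", "arguments": {"address": "12 Elm St", "time": "3pm"}}'
print(execute(output))  # -> Showing booked at 12 Elm St for 3pm
```

The key design point is that the model never runs code directly; it emits structured data, and ordinary software validates and executes it.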
Be careful! Otter.ai secretly records private work conversations, per a class-action lawsuit.
A federal lawsuit seeking class-action status accuses Otter.ai of "deceptively and surreptitiously" recording private conversations that the tech company uses to train its popular transcription service without permission from the people using it.
RAG AI Reduces Manual Tasks for RE Agents
Retrieval-Augmented Generation (RAG) AI systems are transforming residential real estate brokerages by automating or streamlining routine tasks. This gives agents more time to focus on what matters most: building relationships and delivering top-tier service to clients.
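The RAG pattern itself can be sketched in a few lines. This is a toy illustration only (the documents, keyword-overlap retriever, and prompt template are all invented stand-ins; production systems use vector search and a real LLM): retrieve the most relevant document, then build a prompt grounded in it.

```python
# Illustrative document store a brokerage might index.
DOCS = [
    "Listing 12 Elm St: 3 bed, 2 bath, open house Saturday 1-3pm.",
    "Brokerage policy: disclosures must be delivered within 3 days.",
]

def retrieve(query: str) -> str:
    # Naive keyword-overlap scoring stands in for a vector search.
    words = set(query.lower().split())
    return max(DOCS, key=lambda doc: len(words & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    # Ground the model's answer in the retrieved document.
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When is the open house at 12 Elm St?"))
```

Because the answer is constrained to retrieved brokerage documents rather than the model's general knowledge, this pattern is what lets RAG systems handle listing and policy questions reliably.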
US AI Strategy Unveiled
President Donald Trump unveiled a sweeping new plan for America’s “global dominance” in artificial intelligence, proposing to cut back environmental regulations to speed up the construction of AI supercomputers while promoting the sale of U.S.-made AI technologies at home and abroad.
AI Generalization Bias in LLMs
Royal Society Open Science recently published a study titled ‘Generalization bias in large language model summarization of scientific research’, found HERE. The study examined AI’s ability to increase public science literacy and support scientific research overall. Its findings largely supported that ability; however, the authors note that AI summaries often omit details that limit the scope of the scientific findings. This led us here at RETEQ to wonder: is there a risk in the corporate world of overusing AI to summarize information, overgeneralizing details material to decision making, and thereby introducing unnecessary risk into those decisions?
Your Brain on ChatGPT
Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.
Abstract from Cornell University study. This study explores the neural and behavioral consequences of LLM-assisted essay writing.
Prompt Library
200 expert-crafted prompts designed to drive results across Sales, Customer Success, Marketing, RevOps, and Leadership.

