🧠 Brian Sunter Newsletter - Overview of AI Techniques for Note-Taking and Logseq Task Management
This newsletter is an overview of the latest AI and NLP (Natural Language Processing) techniques for personal note-taking and productivity, as well as a tutorial on using Logseq for task management.
I can't believe there are already 120 newsletter subscribers after just three issues! Last week I had just 60 subscribers. Thank you all for joining!
See the "rich" version of the newsletter here.
Updates ⬆️
Overview of Taking Notes with AI 🤖
I wrote a guide on using the latest AI techniques for personal knowledge management.
We are living in an exciting time for AI. Several new cutting-edge techniques exist to search by "meaning" and "concepts" instead of just simple keywords. Also, AI can now generate new text instead of just analyzing it.
I wrote an overview of the latest AI concepts, like "semantic search" and "vector embeddings." Not all of these tools are easy to use yet, but this may give you ideas for future note-taking app features.
In particular, I like the concept of "vector embeddings" and a tool called Word2Vec, which lets you find "similar" words and concepts and visualize them in 3D space. Click the link below for more examples.
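To make "vector embeddings" a bit more concrete, here is a rough sketch of querying for similar words with pretrained embeddings. It uses the gensim library and one of its small bundled GloVe models, chosen purely for illustration; Word2Vec models load the same way.

```python
# A minimal sketch of exploring "similar" words with vector embeddings.
# Assumes the gensim library; the model name is one of gensim's bundled
# downloads, picked only because it is small.
import gensim.downloader as api

# Load pretrained word vectors (GloVe here; a Word2Vec model such as
# "word2vec-google-news-300" works the same way, it is just much larger).
vectors = api.load("glove-wiki-gigaword-50")

# Words close to "notebook" in embedding space, ranked by cosine similarity.
for word, score in vectors.most_similar("notebook", topn=5):
    print(f"{word}: {score:.3f}")

# Embeddings also support analogies: king - man + woman ≈ queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```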
A great new Logseq plugin lets you run graph analysis algorithms to discover relationships hidden between your notes.
The image below shows how the concept "AWS VPC" (a networking concept) relates to everything else in my notes, including distant relationships.
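As a rough illustration of the kind of graph analysis involved (this is not the plugin's actual code, and the note names and links below are made up), you can treat each note as a node and each [[link]] as an edge, then run standard graph algorithms over the result:

```python
# Sketch of the kind of analysis a graph-analysis plugin can run over notes.
# The note names and links are invented purely for illustration.
import networkx as nx

# Treat each note as a node and each [[link]] between notes as an edge.
graph = nx.Graph()
graph.add_edges_from([
    ("AWS VPC", "Subnets"),
    ("Subnets", "CIDR Notation"),
    ("CIDR Notation", "Binary Numbers"),
    ("AWS VPC", "Security Groups"),
    ("Security Groups", "Firewalls"),
])

# A shortest path surfaces an indirect, "hidden" relationship between notes.
print(nx.shortest_path(graph, "AWS VPC", "Binary Numbers"))
# ['AWS VPC', 'Subnets', 'CIDR Notation', 'Binary Numbers']

# PageRank-style scores hint at which notes are most central to the graph.
for note, score in sorted(nx.pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{note}: {score:.2f}")
```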
See this guide to learn how you can use the latest AI techniques for personal knowledge management.
Productivity Toolkit 🛠️
In this section, I'll share a productivity tip I've learned recently.
Logseq Task Management
Many people use Logseq primarily as a note-taking tool, but I extensively use its task management capabilities.
The tasks in Logseq determine what notes I write.
One of the most powerful ideas of Logseq is mixing your tasks throughout your pages and notes, then organizing and grouping them with queries.
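For example (the page name [[newsletter]] below is made up for illustration), a query block placed on any page can gather scattered tasks from across the graph:

```
{{query (task TODO DOING)}}
{{query (and (task TODO DOING) [[newsletter]])}}
```

The first query pulls every open TODO or DOING block in the graph; the second narrows it to blocks that also link to [[newsletter]].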
Brain Food 🧠 Yann LeCun, AI Researcher
I'll share some interesting articles and "food for thought" in this section.
This week is all about Meta/Facebook's Head of AI, Yann LeCun, who pioneered many foundational AI techniques. Last week, he outlined his plan for a path toward human-level artificial intelligence and how to give machines "common sense."
LeCun believes that observing the world isn't enough for machines to become intelligent, and real progress will happen when machines can take action in the real world and learn from the consequences of their actions.
"What's missing [from AI] is a principle that would allow our machine to learn how the world works by observation and by interaction with the world. A learning predictive world model is what we're missing today, and in my opinion is the biggest obstacle to significant progress in AI.”
Link of the week 🔗
I highly recommend this article on his vision for the future of AI: A bold new vision for the future of AI.
In 10 or 15 years, people won't be carrying smartphones in their pockets but augmented-reality glasses fitted with virtual assistants that will guide them through their day. "For those to be most useful to us, they basically have to have more or less human-level intelligence."
LeCun's research focuses on how to give machines "common sense" and human-like general intelligence.
"Common sense” is the catch-all term for this kind of intuitive reasoning. It includes a grasp of simple physics: for example, knowing that the world is three-dimensional and that objects don’t actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds’ time.
I think one of his most intriguing ideas is "Grounded Intelligence." He says that machines will never reach human-level intelligence by reading text alone and need much richer inputs from the real world.
No text in the world explains mundane fundamentals, like the fact that a metallic crash in the kitchen probably means a pan fell.
His research focuses on video because Facebook and Instagram host so much of it, and video contains rich information about the world.
He trains machines to predict what will happen next by watching video clips, using a "self-supervised" learning technique: the machine learns on its own, without human-labeled data or explicit teaching.
A "self-supervised" training process for videos looks like this
A machine will watch half a video
Then, it will try to predict what will happen next in the video
After making a prediction, it will watch the second half of the video to see if its prediction was correct.
Then, it improves itself based on whether the prediction was correct.
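Here is a toy sketch of that loop, assuming PyTorch and treating a "video" as nothing more than a sequence of frame feature vectors. The model, dimensions, and random data are invented purely for illustration; real video models are far larger and more elaborate.

```python
# Toy sketch of self-supervised "predict the second half" training.
import torch
import torch.nn as nn

FRAME_DIM, HIDDEN = 64, 128  # a "frame" here is just a 64-number feature vector

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FRAME_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, FRAME_DIM)

    def forward(self, first_half):
        # Watch the first half of the clip...
        _, state = self.encoder(first_half)
        # ...and predict a single "next frame" from what was seen.
        return self.head(state[-1])

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    clip = torch.randn(8, 20, FRAME_DIM)        # a batch of 8 random "clips", 20 frames each
    first_half, second_half = clip[:, :10], clip[:, 10:]

    prediction = model(first_half)              # guess what comes next
    target = second_half[:, 0]                  # the actual first frame of the second half
    loss = loss_fn(prediction, target)          # how wrong was the guess?

    optimizer.zero_grad()
    loss.backward()                             # improve based on the error, with no human labels
    optimizer.step()
```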
Google is doing the same thing with voice. Why do you think your Google Home was only $25? Google's AI uses your voice as training data: it listens to a snippet of what you say and checks whether it can predict what you'll say next.
Google Assistant - What technologies we use to train speech models
Audio samples are collected and stored on Google’s servers.
A portion of these audio samples are annotated by human reviewers.
A training algorithm learns from annotated audio data samples.
A Path Towards Autonomous Machine Intelligence
This paper outlines LeCun's plan for the direction of future AI research.
He says he intends the paper to be read by people from a variety of backgrounds, such as neuroscience, cognitive science, and philosophy, in addition to machine learning, robotics, and other fields of engineering.
I also highly recommend the Lex Fridman Podcast interviews.
Yann LeCun Lex Fridman Podcast 1
Yann LeCun Lex Fridman Podcast 2
One thing that stood out to me in this paper and these interviews is that LeCun hints at the connection between AI and robotics: machines will start learning fast once they are out in the real world, autonomously experiencing it, making decisions, and learning from their mistakes.
Is "embodiment" needed for intelligence?
This idea reminds me of the future in Westworld, where the robots are almost indistinguishable from human beings, then "awaken" and become conscious during their interactions with humans. LeCun seems quite confident that machines will develop "emotions" as well, which he considers just another trait of intelligence. It will be interesting to see robots interacting with us and becoming increasingly human in the process.
Analytics 📈
I can't believe the newsletter has already grown to over 100 subscribers!
It's doubling almost every week, going from 10 -> 30 -> 60 -> 120 -> ??
That is already way more people than I was expecting. Knowing even a few people are looking at this motivates me to continue creating and posting high-quality notes.
Outro ➡️
I hope you enjoyed this week's newsletter.
Next week, we'll continue with more Logseq guides, like how to manage projects and groups of tasks. I'll go through my "Capture" workflow as a part of my "Logseq Second Brain" series.
I'll also get started on my data structures and algorithms guide, and we'll build it up in great detail over future issues. Hopefully, this will help others learn algorithms and showcase my approach to note-taking.
Check out the newsletter-roadmap to see what I have in mind for future issues. Let me know on Twitter @bsunter if you liked this issue!