Week 1: SmartMotion Vision & Road Ahead #49
FluxxField
announced in
Announcements
Hey everyone — it’s been 1 week since launching SmartMotion.nvim, and I just wanted to take a moment to share the long-term vision and open up discussion.
Thanks to everyone who upvoted, starred, commented, or reported bugs — the feedback has been incredibly helpful.
What is SmartMotion really trying to solve?
There are already some great motion plugins out there: flash.nvim, leap.nvim, hop.nvim — they all bring something useful. But one thing they all share is that they’re opinionated and tightly coupled. You get their motions, their way. Want to modify it? You’re out of luck.
SmartMotion is not a motion plugin. It’s a motion framework.
The goal isn’t to compete feature-for-feature with flash or hop — the goal is to let you build your own motion systems from reusable parts.
What is a composable motion?
A composable motion is one that's built from simple, interchangeable pieces: an extractor that produces targets, filters that narrow them down, a visualizer that presents them (with labels, for example), and an action that runs on the chosen target.
Each module is pluggable. You can mix and match to build any motion behavior you want.
There’s also a merging utility that lets you combine multiple filters, actions, or modifiers into one. Want to filter for visible words AND after the cursor? Merge both filters. Want to jump and yank? Merge both actions.
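As a rough sketch of what that merging utility could look like (the function and field names here are illustrative, not SmartMotion's documented API), a merged filter can simply require every sub-filter to pass:

```lua
-- Illustrative sketch: combine several filter functions into one.
-- A target passes the merged filter only if it passes every sub-filter.
local function merge_filters(...)
  local filters = { ... }
  return function(target, ctx)
    for _, filter in ipairs(filters) do
      if not filter(target, ctx) then
        return false
      end
    end
    return true
  end
end

-- Hypothetical filters: "visible on screen" and "after the cursor".
local function is_visible(target, ctx)
  return target.line >= ctx.top_line and target.line <= ctx.bottom_line
end

local function is_after_cursor(target, ctx)
  return target.line > ctx.cursor_line
    or (target.line == ctx.cursor_line and target.col > ctx.cursor_col)
end

-- One filter that keeps only visible words after the cursor.
local visible_after_cursor = merge_filters(is_visible, is_after_cursor)
```

The same pattern works for actions: a merged "jump and yank" action would just run each sub-action in order.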
Why is this powerful?
Because you can mix and match modules freely (swap a filter here, an action there), it turns motions into recipes.
For example:
- A motion like `s` that jumps to a word after the cursor using labels
- A motion like `dt` that deletes until a character (but shows labels)
- A motion that surrounds the selected target
These are built entirely from modular parts. No custom code needed.
You can also create hot shot motions by skipping the visualizer entirely — these will automatically apply the action to the first matching target. This is perfect for cases where you don’t need to choose and just want speed.
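To make the recipe idea concrete, here is roughly how a labeled motion and a hot shot variant might be registered. This is a hypothetical sketch: `register_motion` and the module names are illustrative, not the plugin's documented API.

```lua
-- Hypothetical sketch of motion "recipes" assembled from named modules.
-- None of these identifiers are guaranteed to match SmartMotion's real API.
local smart_motion = require("smart-motion")

smart_motion.register_motion("s", {
  extractor  = "words",        -- what to target
  filter     = "after_cursor", -- which targets qualify
  visualizer = "labels",       -- how targets are shown for selection
  action     = "jump",         -- what to do with the chosen target
})

-- A hot shot variant: no visualizer, so the action is applied to the
-- first matching target immediately, with no label-selection step.
smart_motion.register_motion("S", {
  extractor = "words",
  filter    = "after_cursor",
  action    = "jump",
})
```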
Cutting down on mappings with inference
Right now, most motion plugins require you to map every behavior to a separate key: `dw`, `yw`, `cw`, etc. But with SmartMotion, the goal is to map fewer keys and let the framework infer the rest.

For example:
- `d` is mapped to the `delete` action
- `w` maps to the `words` extractor

So, hitting `dw` gives SmartMotion all it needs: the `delete` action from `d` and the `words` extractor from `w`.
It then composes the rest from configured defaults (like filters, visualizers, etc.) to execute a composable motion.
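The inference step can be sketched as a couple of table lookups. This is purely illustrative (the post doesn't show the plugin's internals), but it captures the idea of composing a motion from a key pair plus defaults:

```lua
-- Illustrative sketch of key inference: the first key names an action,
-- the second names an extractor, and configured defaults fill in the rest.
local actions = { d = "delete", y = "yank", c = "change" }
local extractors = { w = "words" }

local defaults = { filter = "default", visualizer = "labels" }

local function infer_motion(keys)
  local action = actions[keys:sub(1, 1)]
  local extractor = extractors[keys:sub(2, 2)]
  if not (action and extractor) then
    return nil -- not a recognized combination
  end
  return {
    action = action,
    extractor = extractor,
    filter = defaults.filter,
    visualizer = defaults.visualizer,
  }
end

-- Hitting "dw" yields the delete action plus the words extractor.
local motion = infer_motion("dw")
```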
This will allow you to map just `d`, `y`, `c`, etc. as entrypoints.

Flow State & Target History
SmartMotion also introduces the concept of Flow State: repeating a basic motion (like `j`) disables labels and falls back to native movement — best of both worlds.

There's also a planned Target History system, which allows for two types of repeating motions.
This opens the door to complex workflows like smart repeat, repeat-last-target, or even undoing and reapplying targets with different actions.
Integrating with other plugins
The biggest opportunity is for other plugins to build their motions using SmartMotion instead of reimplementing everything.
Imagine: if your plugin already exposes a list of targets, you can register them with SmartMotion and gain full access to its filters, visualizers, labels, and actions, all without rewriting a motion system from scratch.
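An integration might look something like the following. Again, this is a hypothetical sketch: `register_source` and the target shape are illustrative, since the integration API isn't specified in this post.

```lua
-- Hypothetical integration sketch: a plugin hands its own targets to
-- SmartMotion instead of building labeling and actions itself.
local smart_motion = require("smart-motion")

smart_motion.register_source("my_plugin_targets", function()
  -- Return whatever positions your plugin already tracks.
  return {
    { line = 3,  col = 4 },
    { line = 17, col = 0 },
  }
end)

-- SmartMotion could then apply its own filters, labels, and actions
-- to those targets.
```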
I want your feedback
I’d love to hear your thoughts, ideas, and use cases.
Thanks again to everyone who’s tried SmartMotion so far — this is just the beginning, and I’m excited to see where it goes next.
Let’s build something powerful together.
— Keenan (FluxxField)