From 844eac979bae02900a405ca778a34a491f4fabfb Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 29 Apr 2024 12:44:40 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5e6c8f88..1cebe803 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 This is the GitHub repo for the paper "KAN: Kolmogorov-Arnold Networks" [link]. Find the documentation [here](https://kindxiaoming.github.io/pykan/).
 
-Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem), while KANs are based on the [Kolmogorov-Arnold representation theorem](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem). KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
+Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on the Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability.
 
 A quick intro to KANs is [here](https://kindxiaoming.github.io/pykan/intro.html).
 
 mlp_kan_compare
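The duality the patched paragraph describes (learnable activations on edges vs. fixed activations on nodes) is easy to see in code. The sketch below is illustrative only, not pykan's API: a sine feature basis stands in for the B-splines used in the paper, and all names (`MLPLayer`, `ToyKANLayer`, `n_basis`) are hypothetical.

```python
import torch
import torch.nn as nn

class MLPLayer(nn.Module):
    """MLP style: linear weights on the edges, fixed activation at the nodes."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))  # nonlinearity lives on the nodes

class ToyKANLayer(nn.Module):
    """KAN style: one learnable univariate function per edge; nodes only sum.
    A sine basis is used here in place of the paper's B-splines."""
    def __init__(self, d_in: int, d_out: int, n_basis: int = 8):
        super().__init__()
        self.register_buffer("freq", torch.arange(1, n_basis + 1, dtype=torch.float32))
        # coef[o, i, :] parameterizes the edge function from input i to output o
        self.coef = nn.Parameter(0.1 * torch.randn(d_out, d_in, n_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, d_in, 1) * (n_basis,) -> (batch, d_in, n_basis)
        phi = torch.sin(x.unsqueeze(-1) * self.freq)
        # phi_{o,i}(x_i) = sum_k coef[o,i,k] * sin(k * x_i); nodes just add edges
        return torch.einsum("bik,oik->bo", phi, self.coef)

x = torch.randn(32, 3)
print(MLPLayer(3, 5)(x).shape, ToyKANLayer(3, 5)(x).shape)  # both: (32, 5)
```

Stacking such layers yields the two architectures the paragraph contrasts; the paper's actual construction differs in detail (e.g. each edge also keeps a residual base function alongside the spline). The linked quick intro shows the real pykan API.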