Hello. Is FANN suitable for classifying documents? For example, the AI should recognize which category the document belongs to, name it, and find the document date. If so, how would I start? Thanks a lot, Harald
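For a rough idea of how one might start with FANN here, the sketch below (the sizes, hidden-layer width and the "docs.train" / "docs.net" file names are assumptions, not anything FANN provides) treats each document as a fixed-length numeric feature vector, with one output per category:

    /* Hypothetical sketch: classify documents into categories with FANN.
     * Assumes each document has already been converted to a fixed-length
     * feature vector (e.g. normalised term frequencies) and written to a
     * FANN training file "docs.train"; names and sizes are made up. */
    #include <stdio.h>
    #include "fann.h"

    #define NUM_FEATURES   200   /* inputs: one per document feature (assumption) */
    #define NUM_CATEGORIES 5     /* outputs: one per document category            */

    int main(void)
    {
        /* 3 layers: input, one hidden layer, output */
        struct fann *ann = fann_create_standard(3, NUM_FEATURES, 40, NUM_CATEGORIES);
        fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
        fann_set_activation_function_output(ann, FANN_SIGMOID);

        /* Train on the prepared feature vectors. */
        fann_train_on_file(ann, "docs.train", 5000, 100, 0.01f);
        fann_save(ann, "docs.net");

        fann_destroy(ann);
        return 0;
    }

At run time, fann_run() returns one value per category and the predicted category is simply the index of the largest output. Finding the document date is ordinary text parsing and not something the network itself would do.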
I would like to ask for help with the following matter. I am unable to install the FANN library on Windows 11 in Visual Studio Code. In Visual Studio it compiles very well, but in Visual Studio Code I am only able to call cmake. When I try to build the example .cpp code, the compiler is unable to find the ".c" files. I can point it to where the ".h" files are, but no linking takes place during compilation. I have to use VS Code because only this program has the MQL extension.
Resolved the issue by replacing \r\n with \n in the .netw file.
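A minimal sketch of that line-ending fix, in case it helps someone else (the file names are placeholders): copy the saved network file while dropping '\r' bytes, so the Windows CRLF endings become the plain LF the Linux build expects.

    /* Copy a FANN network file while dropping '\r', converting CRLF to LF.
     * File names are placeholders. */
    #include <stdio.h>

    int main(void)
    {
        FILE *in  = fopen("fann_windows.netw", "rb");
        FILE *out = fopen("fann_unix.netw", "wb");
        int c;

        if (!in || !out) { perror("fopen"); return 1; }
        while ((c = fgetc(in)) != EOF)
            if (c != '\r')
                fputc(c, out);

        fclose(in);
        fclose(out);
        return 0;
    }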
Update: I just compiled the same version on Windows and Linux, but it still works on Windows and not on Linux, throwing FANN Error 3: Wrong version of configuration file, aborting read of configuration file "fann.netw". Why?
I trained a network file with version 2.2.0 on Windows, but when I want to use that network on Linux, which has version 2.2.0+ds-6, it throws FANN Error 3: Wrong version of configuration file, aborting read of configuration file "fann.netw". Is there a simple way to convert the file for use on Linux?
I am currently under Linux and receive this error: symbol lookup error: /home/alexey/Downloads/newfftw/bin/Debug/libfftw3l.so.3.6.9: undefined symbol: fftwl_solvtab_rdft_r2cb
Hello. I have recently found that the file libfanndouble is perhaps needed for the Linux version; I use Code::Blocks on Windows and this file is needed by the program. I train the neural network, but when I feed it with data I get garbage output that is not connected to the activation function: the values are either very high or very low. Has somebody tackled the same problem?
You should recompile the entire library for amd64.
Hey, I've just started playing around with FANN, but I've run into some issues early on. I want to integrate FANN into an existing workflow that is built on x64. I have gotten FANN to run by itself on x86 before, but as soon as I switch to x64 I get a linker error for any FANN function I use: "LNK2001 unresolved external symbol __imp_fann_create_from_file", for example. When switching to x64, Visual Studio compiles the .exe to another folder; I did add fannfloat.dll to that folder, but it doesn't...
Hello, I have exactly the same problem as yours. I searched absolutely everywhere on http://leenissen.dk/fann/wp/ but I didn't find anything about it. Since then, have you succeeded in doing it with FANN, or have you used another library? Or if someone has an idea, don't hesitate; it will really help me and many others, I think. Thanks
Does anyone here have source code for a 7-input neural network with 14 hidden layers that reads in Windows text files? If anyone has C++ or Python, it would be very much appreciated...
When I build tcl-fann and run make install, which invokes "gcc -shared -pipe -O2 -fomit-frame-pointer -Wall -Wno-implicit-int -fPIC -Wl,--export-dynamic -o libfann1.0.so tclfann.o convert.o -lfann -L/usr/lib -ltclstub8.5", I get the error below:
ld: skipping incompatible /usr/lib/libtclstub8.5.a when searching for -ltclstub8.5
ld: skipping incompatible /usr/lib/libc.so when searching for -lc
collect2: error: ld returned 1 exit status
make: *** [libfann1.0.so] Error 1
Hello FANN users out there, I just want to let you all know that I have developed an alternative to FANN. Introducing TFCNN: it is a fully connected neural network library in C with a small footprint, and as such it can be included in your project via a single header file. TFCNNv1 targets any platform that compiles C code, and it features binary classification, a staple set of 5 activation functions, 5 optimisers, and 3 uniform weight initialisation methods. A CPU-based uint8 version is additionally...
The most recent reference to using FANN on GPUs is the thread at https://sourceforge.net/p/fann/discussion/323465/thread/c6b3c726/, which ends in questions. I was wondering whether there has been any movement toward using FANN with CUDA or OpenCL. It seems a little odd that a library as practical and useful as FANN does not support GPUs; at least that's my impression.
np
Thank you.
Yes, of course, but this is what you need.
1. We have a dataset with 4 parameters (for example, sepal length, sepal width, petal length and petal width). This is a publicly available dataset, and our neural network is trained on it with those 4 parameters. Now I want other parameters to be used along with the 4 mentioned above, but they are not present in the available dataset. Can I use this same trained neural network with 2 additional parameters, for a total of 6 parameters? 2. Is there any type of neural network which...
Hi, thank you. By adding the extra parameters to the available dataset, won't it tamper with the dataset?
1) You can; just expand the old dataset with the additional input/output parameters. 2) You can try, but possibly this will not work (or it will work, but only after very, very long training).
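As a sketch of point (1), assuming FANN >= 2.2 (for fann_create_train) and with extra_feature() and the file names as placeholders, expanding the 4-input data to 6 inputs could look like this; the result is then used to train a new 6-input network from scratch:

    /* Expand an existing 4-input dataset to 6 inputs. extra_feature() is a
     * placeholder for however the two new parameters are obtained; the old
     * trained weights cannot simply be reused, so a new 6-input network is
     * trained on the result. Requires FANN >= 2.2 for fann_create_train(). */
    #include "fann.h"

    static fann_type extra_feature(unsigned int sample, unsigned int which)
    {
        (void)sample; (void)which;
        return 0; /* placeholder: supply the real values here */
    }

    int main(void)
    {
        struct fann_train_data *old_data = fann_read_train_from_file("iris4.train");
        unsigned int n    = fann_length_train_data(old_data);
        unsigned int nin  = fann_num_input_train_data(old_data);   /* 4 */
        unsigned int nout = fann_num_output_train_data(old_data);
        struct fann_train_data *new_data = fann_create_train(n, nin + 2, nout);
        unsigned int i, j;

        for (i = 0; i < n; i++) {
            for (j = 0; j < nin; j++)
                new_data->input[i][j] = old_data->input[i][j];   /* copy old inputs  */
            new_data->input[i][nin]     = extra_feature(i, 0);   /* two new inputs   */
            new_data->input[i][nin + 1] = extra_feature(i, 1);
            for (j = 0; j < nout; j++)
                new_data->output[i][j] = old_data->output[i][j]; /* copy old outputs */
        }

        fann_save_train(new_data, "iris6.train");
        fann_destroy_train(old_data);
        fann_destroy_train(new_data);
        return 0;
    }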
First of all, "AI" (FANN or otherwise) is just another name for neural networks. They can't learn a game; they can ADAPT to games over repeated battles (if you construct some points system; be aware the agent way is error-prone, yes), if you make a million builds & runs (use met) and try a logic-block mutating system, but you always have to create the logic rules yourself. And what comes to my mind is that this could be mined like bitcoins on a GPU. What an idea.
sorry i need to kill u now
Hi, I am currently attempting to construct a generative adversarial network setup in C using FANN, but I have run into the issue of bridging the gradient between my generator and discriminator networks. What I would like to do is get the activation gradient for the input layer of the discriminator network after training it on an example produced by the generator, so that I can use the discriminator's input gradient as the desired output of the generator network and then train it. FANN does not seem to have...
There once was an implementation of Self-Organizing Maps (http://leenissen.dk/fann/html_latest/files2/som_gng-txt.html) which, according to old forum threads, got lost at some point in a fork of the code. I was wondering if anyone knows of any development toward implementing SOMs in the current FANN code base?
I'm building a rather complex turn-based strategy game with multiple "worlds" of 3D hex maps, a complex economy, diplomacy, warfare, etc. I was wondering if this kind of game would be too complex to practically train the AI players using something like FANN? I originally had a set of genes that you could set manually (or randomly) to define the AI behavior, with the possibility of training AI players against each other with an evolutionary meta-function that would run hundreds or thousands of games...
Hello from Russia! I have almost completed the program: I trained and saved the file with net.Save("skynet.ann"), but I can't understand how to load the training results back into the NeuralNet. net = new NeuralNet(NetworkType.LAYER, num_layers, num_input, num_neurons_hidden, num_output); net.CreateFromFile("skynet.ann"); // Error CS1061: 'NeuralNet' does not contain a definition for 'CreateFromFile' and no extension method 'CreateFromFile' accepting a first argument of type 'NeuralNet' could be found (possibly...
I have attached a jpg showing a schematic of what I have in mind. The custom output neuron accepts the values for M1 and M2 from the input layer and the value of R from the output of hidden layer 1. The custom output neuron then uses the equation E = R * M1 / (M1 + M2) to produce the output of the network, E. In my thinking, this optimizes the predicted value of R in the context of the relationship between R, M1 and M2, which is known. In order to test this, I need to do the following: construct a network...
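FANN has no hook for a custom output neuron, so one hedged way to prototype just the forward pass described above is to let the network predict R and apply E = R*M1/(M1+M2) in application code after fann_run(); note this does not backpropagate through the equation, which is the hard part of the question. The network file name and input layout below are assumptions.

    /* Forward pass only: the network predicts R from its inputs, and the
     * custom combination E = R * M1 / (M1 + M2) is computed outside FANN.
     * Training *through* this equation would require modifying FANN's
     * backpropagation, which this sketch does not do. */
    #include <stdio.h>
    #include "fann.h"

    int main(void)
    {
        struct fann *ann = fann_create_from_file("r_predictor.net");
        fann_type input[2] = { 0.3, 0.7 };      /* assumed layout: input[0]=M1, input[1]=M2 */
        fann_type *out = fann_run(ann, input);  /* out[0] is the predicted R */

        double M1 = input[0], M2 = input[1];
        double E = out[0] * M1 / (M1 + M2);     /* "custom output neuron", applied by hand */

        printf("R = %f  E = %f\n", (double)out[0], E);
        fann_destroy(ann);
        return 0;
    }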
Hello, I have an ANN application built on the fann library that I use to create ANN models of chemical properties. I typically use rprop with a sigmoid transfer function and my own division of the data. I am currently modeling a chemical property where the molecular weight is part of the property value. For this reason, mw needs to be part of the model input. In this case, mw is very highly correlated with some other necessary inputs. This is a case where two inputs are numerically similar but affect...
It seems there is a certain inefficiency in the use of weights in current neural networks. You are trying to come up with n linear classifiers all operating on one vector; if n is 200 or so, you are going to find it very difficult to find that number of worthwhile different linear splits. Finding that number of nonlinear splits would seem more plausible. But then you have to use random projections: https://discourse.processing.org/t/flaw-in-current-neural-networks/11512
or what
The AxBench benchmark uses the FANN library. Do these links provide newer versions of FANN, or what?
google 'nvidia ai'/'google ai'
Could you please provide a more detailed explanation about this? Thanks
Looks like FANN is abandoned and new technologies from NVIDIA/Google should be tried.
Hi everyone... any solutions to this problem? No success with installing the latest fann from github
sorry i don't see any .fann file.
Try the version stated in the .fann file [at the top].
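A quick way to act on that suggestion (file names are placeholders): print the first line of the problem network file, which holds FANN's configuration-file version string (typically something like FANN_FLO_2.1), and compare it with the header of a file saved by the library you actually have installed.

    /* Print the version header of the problem network file, then save a
     * throwaway network to see what header THIS library writes. */
    #include <stdio.h>
    #include "fann.h"

    int main(void)
    {
        char line[128];
        FILE *f = fopen("problem.net", "r");
        if (f && fgets(line, sizeof line, f))
            printf("network file header: %s", line);
        if (f) fclose(f);

        struct fann *ann = fann_create_standard(3, 2, 2, 1);
        fann_save(ann, "probe.net");   /* compare probe.net's first line */
        fann_destroy(ann);
        return 0;
    }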
Hi, I got the FANN library from GitHub, as the latest version is available there, but the problem has not been solved. How can I fix it?
Bottom line: I compiled and linked against doublefann.c directly (included it in the project and #define FANN_NO_DLL), and it works fine. So the "bug" is in the VS2010 project settings that create the DLLs. Case closed, but you may want to revisit the VS2010 build flags for double precision.
PS. This is not idle bug reporting. On (most?) Windows x64 systems, working with "float" numbers can be 3 times slower than working with "double" precision, especially for 32-bit applications. So if the double version of FANN is buggy, then all the professed "speed advantage" is gone!
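For reference, a minimal sketch of the workaround described above: add doublefann.c to the project sources, define FANN_NO_DLL before including doublefann.h (so fann_type is double and no import library is involved), and run the XOR example that ships with FANN.

    /* Build doublefann.c into the project itself instead of using the
     * prebuilt DLL. The training file is the xor.data from the FANN examples. */
    #define FANN_NO_DLL          /* use the compiled-in code, not the DLL import */
    #include "doublefann.h"      /* fann_type == double */

    int main(void)
    {
        struct fann *ann = fann_create_standard(3, 2, 3, 1);
        fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
        fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

        fann_train_on_file(ann, "xor.data", 500000, 1000, 0.0001f);
        fann_save(ann, "xor_double.net");
        fann_destroy(ann);
        return 0;
    }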
I am building the sample FANN projects (latest version 2.2) with Visual Studio 2010. When using floats as fann_type it works ok, but if I switch to doubles (include doublefann.h instead of fann.h) there are access violations in large networks. I discovered you can also see the bug with small samples like XOR_TRAIN included in the examples folder. Build it as it is (using floats) and note the output. Then switch to doubles (#include doublefann.h and link to fanndouble.lib) and see that the output is different...
I have wondered the same thing. I'm not sure if there's a "best" approach, but deep networks have shown good results by creating one big network.
Apparently, recompiling all the files solves part of this error; now I can at least train the network.
I'm using the jpeg network from the AxBench benchmark suite. It's worth mentioning that other networks have failed as well, with the same error.
paste your network file at pastebin
I've downloaded the latest version from GitHub: https://github.com/libfann/fann. And no success.
try to open this network using latest FANN version
Hi, I'm having the same error. Did someone find a solution? Apparently there's no solution on the web.
I've been playing around with this on and off for weeks and I'm feeling like it isn't possible on embedded systems. I just want to train and develop the network on a PC and implement a real-time network on an embedded ARM system with the training data compiled into it. The problem is having no filesystem. I've tried rewriting fann_create_from_file_fd to accept a header containing a giant string with the xor_data.net saved FANN data, but that's not really feasible, since a network with a hundred...
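One alternative that avoids both the filesystem and the giant-string parsing, sketched under the assumption that the layer layout is known at compile time: dump the trained connections into a C array on the PC, then rebuild the same topology on the device and load the weights with fann_set_weight_array(). File names and layer sizes are placeholders.

    /* (1) PC side: export the trained connection weights as a C array
     *     (run once after training and paste the output into a header).
     * (2) Device side: rebuild the same topology and load that array,
     *     shown in the comments at the bottom -- no file I/O needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "fann.h"

    static void export_weights(const char *netfile)
    {
        struct fann *ann = fann_create_from_file(netfile);
        unsigned int n = fann_get_total_connections(ann);
        struct fann_connection *conns = malloc(n * sizeof *conns);
        unsigned int i;

        fann_get_connection_array(ann, conns);
        printf("static struct fann_connection net_weights[%u] = {\n", n);
        for (i = 0; i < n; i++)
            printf("    { %u, %u, %.9g },\n",
                   conns[i].from_neuron, conns[i].to_neuron, (double)conns[i].weight);
        printf("};\n");

        free(conns);
        fann_destroy(ann);
    }

    int main(void)
    {
        export_weights("xor_float.net");
        return 0;
    }

    /* Device side (sketch):
     *   #include "net_weights.h"                          -- generated above
     *   struct fann *ann = fann_create_standard(3, 2, 3, 1);  -- SAME topology
     *   fann_set_weight_array(ann, net_weights,
     *                         sizeof net_weights / sizeof net_weights[0]);
     */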
Hi there, I'm a pretty big novice at this stuff but excited to learn. I'm trying to adapt the provided source files to work on an ARM Cortex-M0 processor in Keil uVision, but I'm getting lost at what needs to be changed in the libraries to get it working. I'm assuming I'm going to train on a PC and then transfer the training data to the device for compiling. My confusion stems from the simple execution example. It uses fann_create_from_file("xor_float.net"); since this is not a device with a filesystem, how...
See http://leenissen.dk/fann/html/files/fann_error-h.html — your error is "FANN_E_CANT_READ_TD".
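A quick sanity check for that error (the training file name below is a placeholder): load the file on its own and see whether FANN can parse it; the first line must be "num_pairs num_inputs num_outputs", followed by alternating input and output lines that match those counts.

    /* Try to load the training file by itself and print what FANN thinks
     * is in it. If this returns NULL, the header counts do not match the
     * data that follows. */
    #include <stdio.h>
    #include "fann.h"

    int main(void)
    {
        struct fann_train_data *data = fann_read_train_from_file("my3out.train");
        if (data == NULL) {
            fprintf(stderr, "training file could not be parsed\n");
            return 1;
        }
        printf("pairs=%u inputs=%u outputs=%u\n",
               fann_length_train_data(data),
               fann_num_input_train_data(data),
               fann_num_output_train_data(data));
        fann_destroy_train(data);
        return 0;
    }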
I created a file to train and return 3 outputs; however, as shown in the attached files, when running the program I get a "FANN error 10". Does anyone know what I did wrong? Thanks
change library version
Hi, I can't find documentation on how to scale input data before feeding it to the network using the FANNCSharp lib. It has Scaleinput(DataAccessor dac), but how do I construct such an object?
Hi, I have the same problem but no solutions are presented. How can I fix it?
That is not my project =) Really, the forum has been almost dead for the last 3 years.
tyvm, perfect support at least for your project!
http://joelself.github.io/FannCSharp/files/NeuralNetFloat-cs.html https://mac-blog.org.ua/c-fann-example-raspoznavanie-s-ispolzovaniem-neyronnoy-seti/
Sorry to bother you one more time... I installed the NuGet package for FANNCSharp, and I can't find any manual on how to use it... The first command I'm typing is create_standard or fann_create_standard or FANNCSharp_createstandard etc.; nothing exists. Did you create a manual for the wrapper? Regards
yes, np
yes
So what you mean is: if I have a 10-pixel by 10-pixel square image, black and white only, I will need 100 input nodes, and I pass the black-or-white status (0 or 1, for example) to each input node? Thanks for your answer though!
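Exactly that scheme, as a minimal sketch (the hidden size, charset size and pixel_is_black() are assumptions, and a real network would of course be trained first or loaded from a saved file):

    /* A 10x10 black-and-white image flattened into 100 inputs (0 or 1 per
     * pixel), with one output per possible character (10 here). */
    #include <stdio.h>
    #include "fann.h"

    #define W 10
    #define H 10
    #define NUM_CHARS 10

    /* Stand-in for however the image is actually read:
     * return 1 for a black pixel at (x, y), 0 for white. */
    static fann_type pixel_is_black(unsigned int x, unsigned int y)
    {
        (void)x; (void)y;
        return 0;
    }

    int main(void)
    {
        struct fann *ann = fann_create_standard(3, W * H, 30, NUM_CHARS);
        fann_type input[W * H];
        fann_type *out;
        unsigned int x, y, i, best = 0;

        for (y = 0; y < H; y++)                 /* flatten the image row by row */
            for (x = 0; x < W; x++)
                input[y * W + x] = pixel_is_black(x, y);

        out = fann_run(ann, input);             /* one score per character   */
        for (i = 1; i < NUM_CHARS; i++)
            if (out[i] > out[best]) best = i;   /* pick the strongest output */
        printf("predicted character index: %u\n", best);

        fann_destroy(ann);
        return 0;
    }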
Hi, you can't just use an image as input; you will need to translate the image into X,Y coordinates with the color in each cell, so it would be width x height inputs.
Hello, I am new to this but really interested in learning. Can I use this library for something like this for my own studies? One INPUT image, one OUTPUT: a single letter (I will begin with a small charset of only about 10 possible characters). As you probably understood, the image is simply a rectangle with a single character in it. Maybe I will add some noise or objects around to make the task harder. If it is possible, can you maybe give me some hints by pointing me to the correct starting point, please?...
You will need to add extra program logic to distinguish the washing phases in your scenario; I think predicting the phase by NN alone is very hard. But if a washing phase's power consumption depends on the previous phases, it is possible.
I don't understand that. I'll try to concretize my idea: I have, e.g., 50 runs of the washing machine with a specific power consumption over time. Per run there are specific phases with different consumption (heating, cleaning, tumbling, ...). For learning, I could give the power consumption over time per run and maybe the cleaning program that was used (it would be nice if the runs were grouped by program automatically, without giving the program). Then, when in use, I only give the power consumption of the current...
Maybe you can use the amount of unwashed material for that? (<unwashed>/<washed>)*100.0
Yes, I mean consumption. It's the only data I have.
Yes, it is possible using FANN. But using only historical power data (consumption, you mean?) is not the best; you need to do deeper research.
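If you do try it with FANN, one common shape for this kind of problem, sketched here with the window size, scaling and file names as assumptions, is a sliding window: the last N power readings plus the elapsed time as inputs, and the remaining minutes as a single regression output, trained from the historical runs.

    /* Rough sketch only: a sliding window of recent power samples plus
     * elapsed minutes as inputs, remaining minutes as the single output.
     * "runs.train" would be built from the ~50 recorded runs, with the
     * values scaled to a small range beforehand. */
    #include "fann.h"

    #define WINDOW 30   /* last 30 power samples (assumption) */

    int main(void)
    {
        struct fann *ann = fann_create_standard(3, WINDOW + 1, 20, 1);
        fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
        fann_set_activation_function_output(ann, FANN_LINEAR);  /* regression output */

        fann_train_on_file(ann, "runs.train", 20000, 500, 0.001f);
        fann_save(ann, "washer.net");
        fann_destroy(ann);
        return 0;
    }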
Hello, maybe a stupid or funny question: in relation to my home automation, I'm thinking about how to find out when my washing machine will be done with the laundry. My idea was to use a neural network to determine the state or ending time based on the power data of historical runs and the power data of the current run. The neural network would have to recognize the program and forecast the ending time. Is this a possible use case that could be solved using libfann, and how complicated would it be to do so? Thanks...
Hello, I just got started with FANN, specifically the XOR example. It was fun! However, I am not able to follow the other examples and associated datasets because of the lack of documentation. Please provide or point me to documentation for all of the FANN examples (mushrooms, robots, etc.) and their datasets. Thanks, Sesha
Hi guys, I'm working on a simple audio-classification program, which is intended to indicate as accurately as possible whether a given snippet of audio is Speech or Music. My program extracts various features from the audio (fundamental frequency, loudness, rolloff frequency, etc) and uses them to build up histograms that I can then feed to my FANN neural networks as input data. My question is: am I likely to get better results by feeding all of my different types of feature-data into a Single Really...
Looking at the examples and the documentation, I see it's possible to train a neural network on some fixed data set. However, I would like to use a fitness function to train my NN. Is this possible? For example, if I wanted to make a NN to drive a toy car based on sensor inputs, I can't possibly provide a set of training data; all I can do is judge how far the NN managed to drive the car. So, is it possible to train based on a fitness function? Many thanks, Hugo
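FANN itself only trains on fixed input/target pairs, but because it exposes the weights you can wrap it in a simple evolutionary / hill-climbing loop driven by a fitness function: perturb the weights and keep the change if the fitness improves. In the sketch below, evaluate_fitness() is a placeholder for "run the simulation and return how far the car drove", and the layer sizes are assumptions.

    /* Fitness-driven training wrapped around FANN: mutate the connection
     * weights with fann_set_weight_array() and keep mutations that score
     * better on the fitness function. */
    #include <stdlib.h>
    #include "fann.h"

    static double evaluate_fitness(struct fann *ann)
    {
        (void)ann;
        return 0.0;  /* placeholder: run the car simulation, return distance driven */
    }

    int main(void)
    {
        struct fann *ann = fann_create_standard(3, 4, 8, 2);  /* sensors in, controls out */
        unsigned int n = fann_get_total_connections(ann);
        struct fann_connection *best  = malloc(n * sizeof *best);
        struct fann_connection *trial = malloc(n * sizeof *trial);
        double best_fit;
        unsigned int gen, i;

        fann_randomize_weights(ann, -1.0, 1.0);
        fann_get_connection_array(ann, best);
        best_fit = evaluate_fitness(ann);

        for (gen = 0; gen < 1000; gen++) {
            for (i = 0; i < n; i++) {                     /* mutate a copy of the best weights */
                trial[i] = best[i];
                trial[i].weight += 0.1 * ((double)rand() / RAND_MAX - 0.5);
            }
            fann_set_weight_array(ann, trial, n);
            {
                double trial_fit = evaluate_fitness(ann);
                if (trial_fit > best_fit) {               /* keep it if the car drove further */
                    best_fit = trial_fit;
                    for (i = 0; i < n; i++) best[i] = trial[i];
                }
            }
        }
        fann_set_weight_array(ann, best, n);              /* restore the best network found */
        fann_save(ann, "car.net");

        free(best); free(trial);
        fann_destroy(ann);
        return 0;
    }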
OpenCL?
Any updates? I couldn't get it to run on VS 2017 either.
First of all, use FANNTool before doing any coding, to be sure you have correct training data.
Hi all, I've constructed a simple neural network to learn to multiply numbers. The network doesn't converge (the error is 8.83333333333...) and with training it only decreases in the smallest decimals. What can I do to fix that? Here is the code I used:

    import numpy as np

    def nonlin(x, deriv=False):
        if deriv == True:
            return x * (1 - x)
        return 1 / (1 + np.exp(-x))

    X = np.array([[1,2,1], [5,2,2], [0,1,4], [3,3,1], [1,1,1], [5,5,1]])
    y = np.array([[2], [20], [0], [9], [1], [25]])

    np.random.seed(1)

    # randomly initialize our weights...