I am using three VMs, each running Windows Server 2012. During cluster installation I got the message shown in the attached picture; the log file is also attached, can you verify it please? Thanks for the help. Only 1.5 GB is still available on the name node; does that affect the installation process, or would we have received a message notifying us about size if that were the problem? Any other problem?
Baskaran Syncfusion Team October 31, 2016 07:24 AM UTC
Hi Oula,
It seems this happened while updating the cluster node details (this relates to the cluster manager application, not Hadoop). Have you tried the reinstall option shown below?
But as I recommended in other threads, it is mandatory to meet the following hardware specification. If you do not, the Hadoop cluster services may not work properly even after successful cluster creation. RAM-related warnings are not shown while creating a cluster.
Setup              | RAM                                                  | Local Disk       | Number of machines or VMs needed
Studio             | 8 GB or higher                                       | 100 GB or higher | 1
Local cluster      | 8 GB or higher                                       | 100 GB or higher | 1
Multi-node cluster | Name node: 8 GB or higher; Data node: 4 GB or higher | 100 GB or higher | 3 or higher
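As a rough illustration (not part of Syncfusion's tooling), the local-disk requirement from the specification above could be checked before installation with a small script; the function name is hypothetical and only the 100 GB figure comes from the table:

```python
import shutil

REQUIRED_DISK_GB = 100  # "100 GB or higher" from the specification table

def disk_requirement_met(path, required_gb=REQUIRED_DISK_GB):
    """Return True if the drive containing `path` has at least
    `required_gb` gigabytes of total capacity."""
    usage = shutil.disk_usage(path)
    total_gb = usage.total / (1024 ** 3)
    return total_gb >= required_gb

# Any real drive clears a 0 GB bar; a 20 GB drive would fail the 100 GB check.
print(disk_requirement_met(".", required_gb=0))  # True
```

Running this on each node before creating the cluster would surface the disk-size problem that, as noted above, the cluster manager does not warn about for RAM.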
Regards,
Baskaran V
Oula Alshiekh October 31, 2016 11:42 AM UTC
I am sorry, it was my fault to say 1.5 GB remained without specifying where that capacity is. I did not mean RAM capacity at all, because you notified me about that before: https://www.syncfusion.com/forums/127125/what-about-cluster-creation-failer
You mentioned that we should have a 100 GB hard disk, but the troubleshooting page https://help.syncfusion.com/bigdata/cluster-manager/troubleshooting says that at least 2 GB is needed and 50 GB is recommended for the hard disk. What I had was 20 GB for each node, so I tried the installation process but it never succeeded. I will now try with 80 GB for the name node and 20 GB for the other two nodes.
Thanks very much for the reply.
Oula Alshiekh October 31, 2016 03:20 PM UTC
I want to describe this situation: you install the cluster manager and create a cluster (call it A) whose installation does not complete. Then, for some reason, you uninstall the cluster manager and reinstall it. When you launch the manager, cluster A is not listed, so you cannot remove it, and the Hadoop files remain on all the nodes. What about this situation?
Oula Alshiekh October 31, 2016 03:34 PM UTC
Another question I want to verify before installation: as we know, the Syncfusion manager installs its files, when creating a new normal cluster, in the same path as the agent installation path on all nodes. You mentioned that the hard disk capacity should be 100 GB for the three nodes in a normal cluster. Is there any problem if the name node's capacity (80 GB) is available on drive E, where the Syncfusion agent installation path is, while on the other two nodes it is available on drive C, where the Syncfusion installation path is?
Baskaran Syncfusion Team November 1, 2016 12:34 PM UTC
Hi Oula,
Please find the response as follows,
Query: If you install the cluster manager, create a cluster (call it A) whose installation does not complete, then uninstall and reinstall the cluster manager, cluster A is not listed when you launch the manager, so you cannot remove it and the Hadoop files remain on all nodes. What about this situation?
Response: Cluster formation is implemented so that all existing Hadoop files are automatically overwritten when the same version of the cluster is recreated on dead nodes (nodes with no Hadoop services currently running).
Alternatively, you can manually delete the root Hadoop directory at the following location on each cluster node:
For name nodes: C:\Syncfusion\HadoopNode\<version>\
For data nodes: <all drives>:\Syncfusion\HadoopNode\<version>\
Once the above-mentioned Hadoop files are deleted, you can use the same nodes for a new cluster creation.
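The manual cleanup described above could be scripted along these lines; this is a hypothetical helper (not part of the cluster manager), and it is demonstrated against a throwaway temp tree rather than the real C:\Syncfusion path:

```python
import os
import shutil
import tempfile

def remove_hadoop_node_dirs(root):
    """Delete every <version> directory under <root>/Syncfusion/HadoopNode,
    mimicking the manual cleanup of a dead node's Hadoop files."""
    node_dir = os.path.join(root, "Syncfusion", "HadoopNode")
    if not os.path.isdir(node_dir):
        return []
    removed = []
    for version in os.listdir(node_dir):
        target = os.path.join(node_dir, version)
        if os.path.isdir(target):
            shutil.rmtree(target)
            removed.append(version)
    return removed

# Demonstration against a throwaway tree instead of a live node.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "Syncfusion", "HadoopNode", "3.1.0.3"))
removed = remove_hadoop_node_dirs(demo)
print(removed)  # ['3.1.0.3']
```

On a real data node the same sweep would need to be repeated for each drive, since, as noted above, data nodes keep Hadoop files under every drive's \Syncfusion\HadoopNode\ path.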
Note: If you face a "ports are not available" issue when trying to create a cluster with the same nodes, please refer to the following solution.
Query: I want to verify this before installation. As we know, the Syncfusion manager installs its files, when creating a new normal cluster, in the same path as the agent installation path on all nodes. You mentioned that the hard disk capacity should be 100 GB for the three nodes in a normal cluster. Is there any problem if the name node's capacity (80 GB) is on drive E, where the Syncfusion agent installation path is, while on the other two nodes it is on drive C, where the Syncfusion installation path is?
Response: Here I have explained the behavior so that you can understand cluster creation and disk space usage.
In my previous update I recommended keeping 100 GB of local disk, considering both the Hadoop files and HDFS. As you know, the Syncfusion cluster manager transfers Hadoop files to all cluster nodes; depending on the ecosystem components, these Hadoop files may occupy up to 4 GB on the C drive by default.
e.g.
Name nodes will have Spark, Oozie, and HBase by default from cluster creation itself, whereas data nodes do not.
The default installation location for these Hadoop files is the Windows drive (in most cases, the C drive), so it is mandatory to reserve up to 4 GB on the C drive of each cluster node for the Hadoop files alone. These files will be present in the following location on the cluster nodes:
C:\Syncfusion\HadoopNode\<version>\
After cluster creation, enough free local disk space is needed for HDFS to store data and run MapReduce jobs. For HDFS, the Syncfusion cluster will configure all available drive space.
e.g. If a cluster node has C and D drives, HDFS will make use of the available free space in both. These files will be present in:
C:\Syncfusion\HadoopNode\<version>\Metadata\data
D:\Syncfusion\HadoopNode\<version>\Metadata\data
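To estimate how much space HDFS could draw on per drive, one might query the free space of each candidate mount point; this is a stdlib sketch only, and the drive list passed in is illustrative:

```python
import shutil

def free_space_by_drive(drives):
    """Report free bytes for each existing drive/mount point -- roughly
    the space HDFS data directories could draw on per drive."""
    report = {}
    for drive in drives:
        try:
            report[drive] = shutil.disk_usage(drive).free
        except OSError:
            pass  # drive not present on this machine; skip it
    return report

# On a Windows data node this might be ["C:\\", "D:\\"];
# "." always exists, so the demo stays portable.
report = free_space_by_drive(["."])
print(sorted(report))
```

Summing the reported values across a data node's drives gives an upper bound on what its HDFS data directories can hold.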
Note:
Name nodes will not be configured for HDFS; only data nodes will be configured, and they need enough local disk space for HDFS.
Regards,
Baskaran V
Oula Alshiekh November 6, 2016 10:26 AM UTC
Thanks a lot for clarifying the distribution of Syncfusion files across drives, but I have this issue now. As we know, we can connect to a remote cluster by adding its IP address or host name via the Big Data Studio -> Add Cluster button. I added a cluster by its IP and everything was OK, but when I later opened Big Data Studio and tried to connect to the remote cluster by clicking the Connect button, there was no response; it is clicked but behaves as if it were not. Also, the Remove button only shows me a delete confirmation message but never deletes the remote cluster from the list of added clusters. Should I reinstall Big Data Studio? And if so, is there any tool to help me export the Sqoop jobs and Hive tables written in the local cluster?
Rajasekar G Syncfusion Team November 7, 2016 02:10 PM UTC
Hi Oula,
Sorry for the inconvenience caused.
We are not able to reproduce the reported issue. Could you please share the log files from the following location? That would be very helpful for us to isolate the cause of the issue and provide a solution sooner.
Could you also please share the Big Data Studio and Cluster manager version details?
Alternatively, if you want to reinstall Big Data Studio before we investigate the logs you are going to provide, try the steps below to back up the Sqoop job details and Hive tables.
Export Sqoop Written Jobs
We request you to back up the SqoopDataDetail.xml and SqoopConnectionDetail.xml files from the following directory.
Note: We format the Hadoop node when the services are started for the first time from the service manager, so we suggest that you start and stop the services before replacing the data folder.
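A sketch of that backup step; the two file names come from the reply above, but the source directory is a placeholder (the thread does not spell out the real path), so the demo uses a throwaway directory:

```python
import os
import shutil
import tempfile

SQOOP_FILES = ["SqoopDataDetail.xml", "SqoopConnectionDetail.xml"]

def backup_sqoop_files(source_dir, backup_dir):
    """Copy the Sqoop job/connection XML files to a backup directory,
    returning the names actually copied."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for name in SQOOP_FILES:
        src = os.path.join(source_dir, name)
        if os.path.isfile(src):
            shutil.copy2(src, os.path.join(backup_dir, name))
            copied.append(name)
    return copied

# Demo: a throwaway source directory stands in for the real Sqoop path.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for name in SQOOP_FILES:
    open(os.path.join(src, name), "w").close()
copied = backup_sqoop_files(src, dst)
print(copied)  # ['SqoopDataDetail.xml', 'SqoopConnectionDetail.xml']
```

Restoring after reinstallation would be the same copy in reverse, done after the first start/stop cycle noted above.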
Please let us know if you have any queries.
Thanks,
Rajasekar
Oula Alshiekh November 9, 2016 07:49 AM UTC
First of all, thanks very much for your cooperation. In the attachment you will find the logs. The version of both the Big Data cluster and Big Data Studio is 3.1.0.3. Instead of reinstalling Big Data Studio, I installed it on another machine that belongs to the same network as the cluster, and everything was OK. If I encounter the same problem again, I will then consider reinstalling Big Data Studio. Thanks for explaining the reinstallation steps.
Please, I have this urgent issue. I have a cluster of three nodes. A data node crashed, but after restarting it I was not able to restart all the services again, and the nodes appeared as dead. I got the alert:
"Logs files in the node namenode of cluster OULA has exceeded"
I have only 1.73 GB available on the C disk of the name node; is that the reason?
To clarify what I mean by "data node crashed": I only restarted the VM, so nothing actually happened to it, but as a result it appeared as dead in the cluster. After restarting the data node VM I tried to start all the services again, but I was not able to.
Rajasekar G Syncfusion Team November 10, 2016 10:46 AM UTC
Hi Oula,
Please find the response as follows.
Query: Regarding the "log file size exceeded" alert.
Response: You got this message because the C drive has low disk space. By default, the logs may occupy up to 5 GB per node. For now, we request you to manually delete the Hadoop logs from the following location (please delete the logs on all nodes before restarting the Hadoop services):
C:\Syncfusion\HadoopNode\3.1.0.3\SDK\Hadoop\logs\
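That log cleanup could be scripted roughly as follows; the helper is hypothetical and is demonstrated on a temp directory rather than the real logs path above:

```python
import os
import tempfile

def delete_log_files(log_dir):
    """Delete regular files directly under log_dir; return bytes reclaimed."""
    reclaimed = 0
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path):
            reclaimed += os.path.getsize(path)
            os.remove(path)
    return reclaimed

# Demo: two fake 1 KB log files in a throwaway directory.
logs = tempfile.mkdtemp()
for name in ("hadoop.log", "hadoop.log.1"):
    with open(os.path.join(logs, name), "wb") as f:
        f.write(b"x" * 1024)
reclaimed = delete_log_files(logs)
print(reclaimed)  # 2048
```

As the reply notes, this would need to run on every node before the Hadoop services are restarted.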
Query: Regarding restarting the data node VM.
Response: In the current behavior, the first 3 nodes (journal nodes) must be started together to maintain the name nodes' high availability, so restarting the data node alone (the 3rd node in your cluster) leaves it in a dead state. But if you have additional data nodes beyond the first three, they are automatically restarted and added to the cluster. This has already been logged internally, and handling auto restart of the first 3 nodes as well is planned for the next release.
With your current cluster setup (a 3-node cluster), we request you to delete the log files as mentioned above, ensure all nodes (VMs) and agent services are running, and then start all services from the cluster manager management UI as shown in the screenshot below.
Please let us know if you have any queries.
Thanks,
Rajasekar
Rajasekar G Syncfusion Team November 10, 2016 10:58 AM UTC
Hi Olua,
In our previous update the screenshot was not attached properly. Please find the screenshot below.
Thanks,
Rajasekar
Oula Alshiekh November 10, 2016 02:58 PM UTC
I don't know if I understood well. I understood that none of the first three nodes can crash. Suppose the name node crashes (the VM hosting the name node is shut down): can't the cluster continue working with the secondary name node? Or if the first data node crashes, can't we fail over to another data node if one exists? I want to make sure before trying this: I tried stopping the second data node in a cluster that consists of four nodes, and everything was OK.
Baskaran Syncfusion Team November 11, 2016 11:14 AM UTC
Hi Oula,
We have created an incident for the reported issue and will assist you through it under your Direct Trac account.
Our Direct Trac support system can be accessed from the following link: