SOP for running the GPU correlator
By: Sanjay S. Kudale, 30/01/2013
===============================

General information:
--------------------
GPU compute m/cs:
        192.168.9.11 (192.168.4.75, external-world IP)
        192.168.9.12
        192.168.9.13
        192.168.9.14
* Currently all GPU nodes are equipped with C2050 cards.

FPGA control m/c: 192.168.4.68
=======================================================================
ONLINE m/c: shivneri 192.168.1.12
For the alternate ONLINE m/c (lenyadri), edit the hosts file (in TERMINAL-1)
/home/jroy/bin/SYS_FILES/hosts.dat and uncomment lenyadri.
========================================================================

Opening terminals for the various logins:
-----------------------------------------
TERMINAL-0: (FPGA control)
        ssh 192.168.4.68 -l gmrt
        cd harsha
=======================================================================
TERMINAL-1: (GPU correlator main program)
        ssh jroy@192.168.4.75 -X
        cd ~/delay_cal/
=============================================================================
TERMINAL-3: (GPU corr: sockcmd, connection to ONLINE)
        ssh jroy@192.168.4.75 -X
        cd ~/bin/
=============================================================================
TERMINAL-4: (collect, acq-record connection)
        ssh jroy@192.168.4.75 -X
        cd ~/bin/
=============================================================================
TERMINAL-5: (record, for data recording)
        ssh jroy@192.168.4.75 -X
        cd /home/jroy/USERS/harsha/psrdada/gmrt_gpu_corr_online/data/
=============================================================================
TERMINAL-6: (dassrv, communication between ONLINE and the GPU corr)
        ssh observer@192.168.1.12 -X
        cd /odisk/online1/gsbe/dassrv-gpu/
=============================================================================
TERMINAL-7,8,9: (ONLINE MASTER, USER0, USER4) (of standby ONLINE)
        ssh observer@192.168.1.12 -X
It is assumed that ONLINE is running, with master, user0, and user4 started
and running in terminals 7, 8, and 9 respectively.
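Before opening the terminals, it can help to confirm that every machine listed above is reachable from the operator's host. A minimal sketch (this helper is not part of the SOP; the host list is taken from this section, and availability of `ping` with the iputils `-c`/`-W` options is assumed):

```shell
#!/bin/sh
# Reachability check for the correlator machines listed in this SOP.
# One ping per host; a failure only prints a warning, it does not abort.
GPU_NODES="192.168.9.11 192.168.9.12 192.168.9.13 192.168.9.14"
FPGA_CTRL="192.168.4.68"
ONLINE="192.168.1.12"

for h in $GPU_NODES $FPGA_CTRL $ONLINE; do
    if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h NOT reachable"
    fi
done
```

If a GPU node does not respond, check it before starting the MPI run, since mpirun_das.sh will hang on an unreachable node.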
=============================================================================

PROCEDURE:
----------
TERMINAL-1:
        Edit $SYS_FILE/sampler.hdr for the antenna connections.
        For a 4-node cluster, keep the number of antennas = 8.
        For a 2-node cluster, keep the number of antennas = 4.
        Follow the antenna connection sequence in sampler.hdr as below
        (2 antennas per ROACH mode):
        ANTE1-ROACH0
        ANTE2-ROACH0
        ANTE1-ROACH1
        ANTE2-ROACH1
        ANTE1-ROACH2
        ANTE2-ROACH2
        ANTE1-ROACH3
        ANTE2-ROACH3
----------------------------------------------
TERMINAL-0:
        ./dual_adc_pps_8bit_cluster.py
==============================================
TERMINAL-1:
        ./mpirun_das.sh > mpirun_das.out
=============================================
TERMINAL-3:
        ./sockcmd
=============================================
TERMINAL-4:
        ./collect
=============================================
TERMINAL-6:
        ./dassrv
=============================================
TERMINAL-8: (USER0)
        # Edit /temp2/data/gpu.hdr as per requirement.
        # In future, this file will be generated by the GUI.
        # In the current version of the standby ONLINE-GPU corr control,
        # give only those antennas in USER4 that are mentioned
        # in sampler.hdr.
        ante 8 5 3 9 10 29 30 8 25
        cp 9;cmode 8;tpa(11)=1;initndas '/temp2/data/gpu.hdr'
=============================================
TERMINAL-0:
        ./dual_adc_pps_8bit_cluster.py
=============================================
TERMINAL-8: (USER0)
        ante 8 5 3 9 10 29 30 8 25
        cp 9;suba 4;prjtit'GPU';prjobs'GPU';initprj(1,'GPU')
=============================================
TERMINAL-9: (USER4)
        tpa 317 317 255 255 62 62;prjfreq;lnkndasq;gts'3c48';strtndas
=============================================
TERMINAL-5:
        ~/bin/record CODE filename int
=============================================
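The record step above takes exactly three arguments (CODE, filename, int). As a guard against a mistyped invocation, a small wrapper can check the argument count first; this is a sketch, not part of the SOP, and the example values (3c48, an output filename, an integration argument of 4) are purely illustrative:

```shell
#!/bin/sh
# Hypothetical wrapper around ~/bin/record that verifies the argument
# count before starting a recording. The echo shows the command that
# would be run; uncomment the real invocation on the recording node.
record_checked() {
    if [ "$#" -ne 3 ]; then
        echo "usage: record_checked CODE filename int" >&2
        return 1
    fi
    echo "would run: ~/bin/record $1 $2 $3"
    # ~/bin/record "$1" "$2" "$3"
}

record_checked 3c48 3c48_test.lta 4
```

Run it from TERMINAL-5 in the data directory given above, so the output file lands where the SOP expects it.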