<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://t2bwiki.iihe.ac.be/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Stephane+GERARD</id>
	<title>T2B Wiki - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://t2bwiki.iihe.ac.be/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Stephane+GERARD"/>
	<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/Special:Contributions/Stephane_GERARD"/>
	<updated>2026-05-16T08:30:30Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=722</id>
		<title>BackupT2BCloud</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=722"/>
		<updated>2016-08-30T14:27:14Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Backup of the OpenNebula database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Backup of VMs ==&lt;br /&gt;
The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are automatically replicated to tesla. You can access these snapshots through /var/lib/one/.zfs.&lt;br /&gt;
&lt;br /&gt;
== Backup of the OpenNebula database ==&lt;br /&gt;
Since the OpenNebula database is located in /var/lib/mysql, it is not covered by the ZFS snapshots described in the previous section. We have therefore created a cron task that regularly dumps the MySQL database into the /var/lib/one directory.&lt;br /&gt;
&lt;br /&gt;
To avoid having to specify the user and password in the mysqldump command, we have created the file ~/.my.cnf with the following content:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mysqldump]&lt;br /&gt;
user=mysqluser&lt;br /&gt;
password=secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the script that dumps the database to a file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Create the backup directory if it doesn&#039;t exist&lt;br /&gt;
BACKUPDIR=/var/lib/one/one_mysql_db_backups&lt;br /&gt;
mkdir -p &amp;quot;$BACKUPDIR&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove backups older than 7 days&lt;br /&gt;
find &amp;quot;$BACKUPDIR&amp;quot; -name &#039;one_db_mysqldump*&#039; -mtime +7 -exec rm -f {} \;&lt;br /&gt;
&lt;br /&gt;
# Make a dump of the db to a file in BACKUPDIR&lt;br /&gt;
DATE=$(date +&#039;%d-%m-%y_%H:%M:%S&#039;)&lt;br /&gt;
FILENAME=&amp;quot;$BACKUPDIR/one_db_mysqldump_$DATE&amp;quot;&lt;br /&gt;
mysqldump opennebula &amp;gt; &amp;quot;$FILENAME&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Dumps older than 7 days are removed.&lt;br /&gt;
&lt;br /&gt;
Here is the crontab entry (the dump runs twice a day, at 05:00 and 17:00):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
0 5,17 * * * /root/one_db_backup.sh &amp;gt; /var/log/one_db_backup.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=721</id>
		<title>BackupT2BCloud</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=721"/>
		<updated>2016-08-30T14:11:19Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Backup of the OpenNebula database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Backup of VMs ==&lt;br /&gt;
The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are automatically replicated to tesla. You can access these snapshots through /var/lib/one/.zfs.&lt;br /&gt;
&lt;br /&gt;
== Backup of the OpenNebula database ==&lt;br /&gt;
Since the OpenNebula database is located in /var/lib/mysql, it is not covered by the ZFS snapshots described in the previous section. We have therefore created a cron task that regularly dumps the MySQL database into the /var/lib/one directory.&lt;br /&gt;
&lt;br /&gt;
To avoid having to specify the user and password in the mysqldump command, we have created the file ~/.my.cnf with the following content:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mysqldump]&lt;br /&gt;
user=mysqluser&lt;br /&gt;
password=secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the script that dumps the database to a file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Create the backup directory if it doesn&#039;t exist&lt;br /&gt;
BACKUPDIR=/var/lib/one/one_mysql_db_backups&lt;br /&gt;
mkdir -p &amp;quot;$BACKUPDIR&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Remove backups older than 7 days&lt;br /&gt;
find &amp;quot;$BACKUPDIR&amp;quot; -name &#039;one_db_mysqldump*&#039; -mtime +7 -exec rm -f {} \;&lt;br /&gt;
&lt;br /&gt;
# Make a dump of the db to a file in BACKUPDIR&lt;br /&gt;
DATE=$(date +&#039;%d-%m-%y_%H:%M:%S&#039;)&lt;br /&gt;
FILENAME=&amp;quot;$BACKUPDIR/one_db_mysqldump_$DATE&amp;quot;&lt;br /&gt;
mysqldump opennebula &amp;gt; &amp;quot;$FILENAME&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Dumps older than 7 days are removed.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=720</id>
		<title>BackupT2BCloud</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=720"/>
		<updated>2016-08-30T13:42:04Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Backup of the OpenNebula database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Backup of VMs ==&lt;br /&gt;
The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are automatically replicated to tesla. You can access these snapshots through /var/lib/one/.zfs.&lt;br /&gt;
&lt;br /&gt;
== Backup of the OpenNebula database ==&lt;br /&gt;
Since the OpenNebula database is located in /var/lib/mysql, it is not covered by the ZFS snapshots described in the previous section. We have therefore created a cron task that regularly dumps the MySQL database into the /var/lib/one directory.&lt;br /&gt;
&lt;br /&gt;
To avoid having to specify the user and password in the mysqldump command, we have created the file ~/.my.cnf with the following content:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mysqldump]&lt;br /&gt;
user=mysqluser&lt;br /&gt;
password=secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=719</id>
		<title>BackupT2BCloud</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=719"/>
		<updated>2016-08-30T13:35:23Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Backup of VMs ==&lt;br /&gt;
The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are automatically replicated to tesla. You can access these snapshots through /var/lib/one/.zfs.&lt;br /&gt;
&lt;br /&gt;
== Backup of the OpenNebula database ==&lt;br /&gt;
Since the OpenNebula database is located in /var/lib/mysql, it is not covered by the ZFS snapshots described in the previous section. We have therefore created a cron task that regularly dumps the MySQL database into the /var/lib/one directory.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=718</id>
		<title>BackupT2BCloud</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=BackupT2BCloud&amp;diff=718"/>
		<updated>2016-08-30T13:34:40Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: Created page with &amp;quot;= Backup of VMs = The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are done automatically to tesla. You can access to the...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Backup of VMs =&lt;br /&gt;
The whole /var/lib/one directory is mounted from the volta fileserver, and regular scheduled snapshots are automatically replicated to tesla. You can access these snapshots through /var/lib/one/.zfs.&lt;br /&gt;
&lt;br /&gt;
= Backup of the OpenNebula database =&lt;br /&gt;
Since the OpenNebula database is located in /var/lib/mysql, it is not covered by the ZFS snapshots described in the previous section. We have therefore created a cron task that regularly dumps the MySQL database into the /var/lib/one directory.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=717</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=717"/>
		<updated>2016-08-30T13:25:08Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* T2B Cloud */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly switch the cluster back on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL| &amp;lt;strike&amp;gt;APEL&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| &amp;lt;strike&amp;gt;SL4 x86_64 WNs&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| &amp;lt;strike&amp;gt;Upgrade CA certificates&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controller on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| &amp;lt;strike&amp;gt;LDAP authentication system for the replication between UCL and IIHE sites&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| &amp;lt;strike&amp;gt;How to implement local PBS submission with CRAB ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| &amp;lt;strike&amp;gt;How to create an account for a CMS user from UCL ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[OSErrata| &amp;lt;strike&amp;gt;Deploying OS errata&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new dCache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
*[[LToS| Support of Long-tail of Science]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor &amp;lt;strike&amp;gt;BEgrid wiki&amp;lt;/strike&amp;gt;(OBSOLETE)]&lt;br /&gt;
*[[Test_things| &amp;lt;strike&amp;gt;Test things&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[Lemon_installation| &amp;lt;strike&amp;gt;Lemon installation&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[QuattorPointers| &amp;lt;strike&amp;gt;Pointers&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; to more in-depth information on quattor&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AddingMachineToCluster| &amp;lt;strike&amp;gt;Adding&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; a new machine to the cluster&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on an SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
*[[MigrateDBMySQL| Migrate one DB from sqlite to mysql]]&lt;br /&gt;
*[[BackupT2BCloud| Backup of the T2B Cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface]  |  [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Network ====&lt;br /&gt;
* [[network_bond_and_tag|Bonding of 2 interfaces + tagging of 2 vlans on the bond (PRIV+PUB)]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AideMemoire&amp;diff=711</id>
		<title>AideMemoire</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AideMemoire&amp;diff=711"/>
		<updated>2016-08-02T14:19:58Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* How to use ncm-metaconfig ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== How to use ncm-metaconfig? ===&lt;br /&gt;
The official documentation can be found [https://github.com/quattor/configuration-modules-core/blob/master/ncm-metaconfig/src/main/perl/metaconfig.pod here].&lt;br /&gt;
&lt;br /&gt;
Here is an example ready to be included in a machine profile for test purposes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# first, you need to deploy the tt file on the machine&lt;br /&gt;
# (the standard way is to include tt files in the ncm-metaconfig rpm...)&lt;br /&gt;
variable CONTENTS = &amp;lt;&amp;lt;EOF;&lt;br /&gt;
name = {&lt;br /&gt;
[% FILTER indent -%]&lt;br /&gt;
hosts = [% hosts.join(&#039;,&#039;) %]&lt;br /&gt;
port = [% port %]&lt;br /&gt;
master = [% master ? &amp;quot;TRUE&amp;quot; : &amp;quot;FALSE&amp;quot; %]&lt;br /&gt;
description = &amp;quot;[% description %]&amp;quot;&lt;br /&gt;
[%     IF option.defined -%]&lt;br /&gt;
option = &amp;quot;[% option %]&amp;quot;&lt;br /&gt;
[%     END -%]&lt;br /&gt;
[% END -%]&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&#039;/software/components/filecopy/services&#039; = npush(&lt;br /&gt;
	escape(&#039;/usr/share/templates/quattor/metaconfig/example/main.tt&#039;), nlist(&#039;config&#039;, CONTENTS, &#039;perms&#039;, &#039;0644&#039;)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
# below, the real metaconfig work&lt;br /&gt;
include &#039;components/metaconfig/config&#039;;&lt;br /&gt;
include &#039;metaconfig/example/config&#039;;&lt;br /&gt;
prefix &#039;/software/components/metaconfig/services/{/etc/example/exampled.conf}/contents&#039;;&lt;br /&gt;
&#039;hosts&#039; = list(&#039;server1&#039;, &#039;server3&#039;);&lt;br /&gt;
&#039;port&#039; = 800;&lt;br /&gt;
&#039;master&#039; = false;&lt;br /&gt;
&#039;description&#039; = &#039;My example&#039;;&lt;br /&gt;
&lt;br /&gt;
# the tt file must be created before ncm-metaconfig runs&lt;br /&gt;
&#039;/software/components/metaconfig/dependencies/pre&#039; = push(&#039;filecopy&#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this example to work, you need a directory metaconfig/example in your site with the following content:&lt;br /&gt;
&lt;br /&gt;
* config.pan:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
unique template metaconfig/example/config;&lt;br /&gt;
&lt;br /&gt;
include {&#039;metaconfig/example/schema&#039;};&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
bind &amp;quot;/software/components/metaconfig/services/{/etc/example/exampled.conf}/contents&amp;quot; = example_service;&lt;br /&gt;
&lt;br /&gt;
prefix &amp;quot;/software/components/metaconfig/services/{/etc/example/exampled.conf}&amp;quot;;&lt;br /&gt;
&amp;quot;daemon&amp;quot; = list(&amp;quot;exampled&amp;quot;);&lt;br /&gt;
&amp;quot;module&amp;quot; = &amp;quot;example/main&amp;quot;;&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* schema.pan:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
declaration template metaconfig/example/schema;&lt;br /&gt;
&lt;br /&gt;
include { &#039;pan/types&#039; };&lt;br /&gt;
&lt;br /&gt;
type example_service = {&lt;br /&gt;
    &#039;hosts&#039; :  type_hostname[]&lt;br /&gt;
    &#039;port&#039; : type_port&lt;br /&gt;
    &#039;master&#039; : boolean&lt;br /&gt;
    &#039;description&#039; : string&lt;br /&gt;
    &#039;option&#039; ? string&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AideMemoire&amp;diff=710</id>
		<title>AideMemoire</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AideMemoire&amp;diff=710"/>
		<updated>2016-08-02T14:11:45Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* How to use ncm-metaconfig ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== How to use ncm-metaconfig? ===&lt;br /&gt;
The official documentation can be found [https://github.com/quattor/configuration-modules-core/blob/master/ncm-metaconfig/src/main/perl/metaconfig.pod here].&lt;br /&gt;
&lt;br /&gt;
Here is an example ready to be included in a machine profile for test purposes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# first, you need to deploy the tt file on the machine&lt;br /&gt;
# (the standard way is to include tt files in the ncm-metaconfig rpm...)&lt;br /&gt;
variable CONTENTS = &amp;lt;&amp;lt;EOF;&lt;br /&gt;
name = {&lt;br /&gt;
[% FILTER indent -%]&lt;br /&gt;
hosts = [% hosts.join(&#039;,&#039;) %]&lt;br /&gt;
port = [% port %]&lt;br /&gt;
master = [% master ? &amp;quot;TRUE&amp;quot; : &amp;quot;FALSE&amp;quot; %]&lt;br /&gt;
description = &amp;quot;[% description %]&amp;quot;&lt;br /&gt;
[%     IF option.defined -%]&lt;br /&gt;
option = &amp;quot;[% option %]&amp;quot;&lt;br /&gt;
[%     END -%]&lt;br /&gt;
[% END -%]&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&#039;/software/components/filecopy/services&#039; = npush(&lt;br /&gt;
	escape(&#039;/usr/share/templates/quattor/metaconfig/example/main.tt&#039;), nlist(&#039;config&#039;, CONTENTS, &#039;perms&#039;, &#039;0644&#039;)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
# below, the real metaconfig work&lt;br /&gt;
include &#039;components/metaconfig/config&#039;;&lt;br /&gt;
include &#039;metaconfig/example/config&#039;;&lt;br /&gt;
prefix &#039;/software/components/metaconfig/services/{/etc/example/exampled.conf}/contents&#039;;&lt;br /&gt;
&#039;hosts&#039; = list(&#039;server1&#039;, &#039;server3&#039;);&lt;br /&gt;
&#039;port&#039; = 800;&lt;br /&gt;
&#039;master&#039; = false;&lt;br /&gt;
&#039;description&#039; = &#039;My example&#039;;&lt;br /&gt;
&lt;br /&gt;
# the tt file must be created before ncm-metaconfig runs&lt;br /&gt;
&#039;/software/components/metaconfig/dependencies/pre&#039; = push(&#039;filecopy&#039;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=MigrateDBMySQL&amp;diff=697</id>
		<title>MigrateDBMySQL</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=MigrateDBMySQL&amp;diff=697"/>
		<updated>2016-06-22T15:21:06Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: Created page with &amp;quot;== Convert an OpenNebula DB from SQLite to MySQL == *The recipe described here [http://vadikgo.tumblr.com/post/34325489321/convert-an-opennebula-db-from-sqlite-to-mysql] was t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Convert an OpenNebula DB from SQLite to MySQL ==&lt;br /&gt;
*The recipe described here [http://vadikgo.tumblr.com/post/34325489321/convert-an-opennebula-db-from-sqlite-to-mysql] was tested on the VSC-VUB cloud (sl6x, opennebula 4.12), and it works.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=696</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=696"/>
		<updated>2016-06-22T15:17:52Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* T2B Cloud */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly switch the cluster back on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL| &amp;lt;strike&amp;gt;APEL&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| &amp;lt;strike&amp;gt;SL4 x86_64 WNs&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| &amp;lt;strike&amp;gt;Upgrade CA certificates&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controller on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| &amp;lt;strike&amp;gt;LDAP authentication system for the replication between UCL and IIHE sites&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| &amp;lt;strike&amp;gt;How to implement local PBS submission with CRAB ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| &amp;lt;strike&amp;gt;How to create an account for a CMS user from UCL ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[OSErrata| &amp;lt;strike&amp;gt;Deploying OS errata&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new dCache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental: Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
*[[LToS| Support of Long-tail of Science]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor &amp;lt;strike&amp;gt;BEgrid wiki&amp;lt;/strike&amp;gt;(OBSOLETE)]&lt;br /&gt;
*[[Test_things| &amp;lt;strike&amp;gt;Test things&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[Lemon_installation| &amp;lt;strike&amp;gt;Lemon installation&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[QuattorPointers| &amp;lt;strike&amp;gt;Pointers to more in-depth information on quattor&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[AddingMachineToCluster| &amp;lt;strike&amp;gt;Adding a new machine to the cluster&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| How to build a new pysvn on an SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
*[[MigrateDBMySQL| Migrate one DB from sqlite to mysql]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface] | [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Network ====&lt;br /&gt;
* [[network_bond_and_tag|Bonding of 2 interfaces + tagging of 2 VLANs on the bond (PRIV+PUB)]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=692</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=692"/>
		<updated>2016-05-19T12:57:46Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, applying its recipes to the letter will break the CE. Here are the configurations we actually applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (the VO admin) need to get a robot certificate and register it in the beapps VO. Then extract the user certificate and the private key into a directory (here .globus_pusp) and set the correct permissions on them. With this [https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/ script], you can create a PUSP for a given user (mdupont) with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, if you issue the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 23:59:51&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you will see that the VOMS extensions are missing from this proxy. To add the beapps VOMS extension:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-init --voms beapps --noregen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The --noregen option is important: you only want to add VOMS extensions to the PUSP that already exists in /tmp/x509up_u20533, not regenerate it.&lt;br /&gt;
&lt;br /&gt;
Now it looks like a regular beapps proxy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont/CN=730118287&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 11:59:57&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
=== VO beapps extension information ===&lt;br /&gt;
VO        : beapps&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/ST=Brussels/L=Brussels/O=Le reseau telematique belge de la recherche/CN=voms01.begrid.be&lt;br /&gt;
attribute : /beapps/Role=NULL/Capability=NULL&lt;br /&gt;
timeleft  : 11:59:56&lt;br /&gt;
uri       : voms01.begrid.be:18004&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
* https://wiki.egi.eu/wiki/Usage_of_the_per_user_sub_proxy_in_EGI&lt;br /&gt;
* https://wiki.egi.eu/wiki/MAN12&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=691</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=691"/>
		<updated>2016-05-18T20:25:56Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Creation of per-user sub-proxies for beapps VO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, applying its recipes to the letter will break the CE. Here are the configurations we actually applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (the VO admin) need to get a robot certificate and register it in the beapps VO. Then extract the user certificate and the private key into a directory (here .globus_pusp) and set the correct permissions on them. With this [https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/ script], you can create a PUSP for a given user (mdupont) with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, if you issue the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 23:59:51&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you will see that the VOMS extensions are missing from this proxy. To add the beapps VOMS extension:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-init --voms beapps --noregen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The --noregen option is important: you only want to add VOMS extensions to the PUSP that already exists in /tmp/x509up_u20533, not regenerate it.&lt;br /&gt;
&lt;br /&gt;
Now it looks like a regular beapps proxy:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont/CN=730118287&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 11:59:57&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
=== VO beapps extension information ===&lt;br /&gt;
VO        : beapps&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/ST=Brussels/L=Brussels/O=Le reseau telematique belge de la recherche/CN=voms01.begrid.be&lt;br /&gt;
attribute : /beapps/Role=NULL/Capability=NULL&lt;br /&gt;
timeleft  : 11:59:56&lt;br /&gt;
uri       : voms01.begrid.be:18004&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=690</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=690"/>
		<updated>2016-05-18T20:23:13Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Creation of per-user sub-proxies for beapps VO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, applying its recipes to the letter will break the CE. Here are the configurations we actually applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (the VO admin) need to get a robot certificate and register it in the beapps VO. Then extract the user certificate and the private key into a directory (here .globus_pusp) and set the correct permissions on them. With this [https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/ script], you can create a PUSP for a given user (mdupont) with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, if you issue the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 23:59:51&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you will see that the VOMS extensions are missing from this proxy. To add the beapps VOMS extension:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-init --voms beapps --noregen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The --noregen option is important: you only want to add VOMS extensions to the PUSP that already exists in /tmp/x509up_u20533, not regenerate it.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=689</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=689"/>
		<updated>2016-05-18T20:22:57Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Creation of per-user sub-proxies for beapps VO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, applying its recipes to the letter will break the CE. Here are the configurations we actually applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (= the VO admin) need to get a robot certificate that you will register in the beapps VO. After that, you have to extract the user certificate and the private key into a directory (.globus_pusp) and set the correct permissions on them. Using this [https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/ script], you can create a PUSP for a given user (mdupont) by issuing the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, if you issue the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
subject   : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be/CN=user:mdupont&lt;br /&gt;
issuer    : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
identity  : /DC=org/DC=terena/DC=tcs/C=BE/O=Vrije Universiteit Brussel/CN=Robot - STEPHANE GERARD stgerard@vub.ac.be&lt;br /&gt;
type      : RFC3820 compliant impersonation proxy&lt;br /&gt;
strength  : 1024&lt;br /&gt;
path      : /tmp/x509up_u20533&lt;br /&gt;
timeleft  : 23:59:51&lt;br /&gt;
key usage : Digital Signature, Key Encipherment&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you will see that the VOMS extensions are missing from this proxy. To add the beapps VOMS extension:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-init --voms beapps --noregen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The --noregen option is important because you only want to add the VOMS extensions to the PUSP proxy that already exists in /tmp/x509up_u20533.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=688</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=688"/>
		<updated>2016-05-18T20:22:12Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Creation of per-user sub-proxies for beapps VO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, if you apply these recipes to the letter, it will break the CE. Here are the actual configurations we have applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (= the VO admin) need to get a robot certificate that you will register in the beapps VO. After that, you have to extract the user certificate and the private key into a directory (.globus_pusp) and set the correct permissions on them. Using this [https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/ script], you can create a PUSP for a given user (mdupont) by issuing the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, if you issue the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-info --all&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you will see that the VOMS extensions are missing from this proxy. To add the beapps VOMS extension:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
voms-proxy-init --voms beapps --noregen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The --noregen option is important because you only want to add the VOMS extensions to the PUSP proxy that already exists in /tmp/x509up_u20533.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=687</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=687"/>
		<updated>2016-05-18T20:15:48Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, if you apply these recipes to the letter, it will break the CE. Here are the actual configurations we have applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creation of per-user sub-proxies for beapps VO ==&lt;br /&gt;
First of all, you (= the VO admin) need to get a robot certificate that you will register in beapps VO. After that, you have to extract the usercert and the private key in a directory (.globus_pusp) directory and set the correct permissions. Thanks to this [script https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/], you can create a PUSP for a given user (mdupont) by issuing the following command :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./create_pusp -u mdupont -c ~/.globus_pusp/usercert.pem -k ~/.globus_pusp/userkey.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=686</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=686"/>
		<updated>2016-05-18T19:52:43Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Configuration of the CE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, if you apply these recipes to the letter, it will break the CE. Here are the actual configurations we have applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* /etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
path = /usr/lib64/lcmaps&lt;br /&gt;
&lt;br /&gt;
vomspoolaccount = &amp;quot;lcmaps_voms_poolaccount.mod&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                       &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
                       &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalgroup = &amp;quot;lcmaps_voms_localgroup.mod&amp;quot;&lt;br /&gt;
                      &amp;quot;-groupmapfile /etc/lcmaps/groupmapfile&amp;quot;&lt;br /&gt;
                      &amp;quot;-mapmin 0 &amp;quot;&lt;br /&gt;
&lt;br /&gt;
proxycheck = &amp;quot;lcmaps_verify_proxy.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-certdir /etc/grid-security/certificates&amp;quot;&lt;br /&gt;
                  &amp;quot;--allow-limited-proxy&amp;quot;&lt;br /&gt;
&lt;br /&gt;
posixenf = &amp;quot;lcmaps_posix_enf.mod&amp;quot;&lt;br /&gt;
                &amp;quot;-maxuid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxpgid 1&amp;quot;&lt;br /&gt;
                &amp;quot;-maxsgid 32&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vomslocalaccount = &amp;quot;lcmaps_voms_localaccount.mod&amp;quot;&lt;br /&gt;
                        &amp;quot;-gridmapfile /etc/lcmaps/gridmapfile&amp;quot;&lt;br /&gt;
                        &amp;quot;-use_voms_gid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_pool = &amp;quot;lcmaps_robot_poolaccount.mod&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                  &amp;quot;-gridmapdir /etc/grid-security/gridmapdir/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
poolaccount = &amp;quot;lcmaps_poolaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-override_inconsistency&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapdir /etc/grid-security/gridmapdir&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_ban_dn = &amp;quot;lcmaps_robot_ban_dn.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
localaccount = &amp;quot;lcmaps_localaccount.mod&amp;quot;&lt;br /&gt;
                    &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ban_dn = &amp;quot;lcmaps_ban_dn.mod&amp;quot;&lt;br /&gt;
              &amp;quot;-banmapfile /etc/lcas/ban_users.db&amp;quot;&lt;br /&gt;
&lt;br /&gt;
robot_local = &amp;quot;lcmaps_robot_localaccount.mod&amp;quot;&lt;br /&gt;
                   &amp;quot;-gridmapfile /etc/grid-security/grid-mapfile&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Policies:&lt;br /&gt;
voms:&lt;br /&gt;
proxycheck -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
standard:&lt;br /&gt;
proxycheck -&amp;gt; localaccount&lt;br /&gt;
localaccount -&amp;gt; posixenf | poolaccount&lt;br /&gt;
poolaccount -&amp;gt; posixenf&lt;br /&gt;
&lt;br /&gt;
combi_mapping:&lt;br /&gt;
ban_dn -&amp;gt; robot_ban_dn&lt;br /&gt;
robot_ban_dn -&amp;gt; proxycheck&lt;br /&gt;
proxycheck -&amp;gt; robot_pool&lt;br /&gt;
~robot_pool -&amp;gt; robot_local&lt;br /&gt;
~robot_local -&amp;gt; vomslocalgroup&lt;br /&gt;
vomslocalgroup -&amp;gt; vomslocalaccount&lt;br /&gt;
vomslocalaccount -&amp;gt; posixenf | vomspoolaccount&lt;br /&gt;
vomspoolaccount -&amp;gt; posixenf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=685</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=685"/>
		<updated>2016-05-18T19:45:06Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, if you apply these recipes to the letter, it will break the CE. Here are the real configurations we have applied:&lt;br /&gt;
* /etc/glexec.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[glexec]&lt;br /&gt;
create_target_proxy=no&lt;br /&gt;
lcas_db_file=/etc/lcas/lcas-glexec.db&lt;br /&gt;
lcas_debug_level=5&lt;br /&gt;
lcas_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcas_log_level=5&lt;br /&gt;
lcmaps_db_file=/etc/lcmaps/lcmaps.db.glexec.pusp&lt;br /&gt;
lcmaps_debug_level=5&lt;br /&gt;
lcmaps_get_account_policy=combi_mapping&lt;br /&gt;
lcmaps_log_file=/var/log/glexec/lcas_lcmaps.log&lt;br /&gt;
lcmaps_log_level=5&lt;br /&gt;
lcmaps_voms_verification=no&lt;br /&gt;
linger=no&lt;br /&gt;
log_destination=file&lt;br /&gt;
log_file=/var/log/glexec/glexec.log&lt;br /&gt;
log_level=5&lt;br /&gt;
omission_private_key_white_list=tomcat&lt;br /&gt;
preserve_env_variables=&lt;br /&gt;
silent_logging=no&lt;br /&gt;
use_lcas=no&lt;br /&gt;
user_identity_switch_by=lcmaps&lt;br /&gt;
user_white_list=tomcat&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=683</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=683"/>
		<updated>2016-05-18T14:50:41Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE. However, if you apply these recipes to the letter, it will break the CE. Here are the real configurations we have applied:&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=682</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=682"/>
		<updated>2016-05-18T14:48:07Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Configuration of the CE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [https://wiki.egi.eu/wiki/MAN12 link] explains how to set up the PUSP mechanism on the CE.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=681</id>
		<title>LToS</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=LToS&amp;diff=681"/>
		<updated>2016-05-18T14:47:48Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: Created page with &amp;quot;== Configuration of the CE == This [link https://wiki.egi.eu/wiki/MAN12] explains how to set up the PUSP mechanism on the CE.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Configuration of the CE ==&lt;br /&gt;
This [link https://wiki.egi.eu/wiki/MAN12] explains how to set up the PUSP mechanism on the CE.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=680</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=680"/>
		<updated>2016-05-18T14:45:58Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly switch the cluster on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL| &amp;lt;strike&amp;gt;APEL&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| &amp;lt;strike&amp;gt;SL4 x86_64 WNs&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| &amp;lt;strike&amp;gt;Upgrade CA certificates&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controller on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| &amp;lt;strike&amp;gt;LDAP authentication system for the replication between UCL and IIHE sites&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| &amp;lt;strike&amp;gt;How to implement local PBS submission with CRAB ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| &amp;lt;strike&amp;gt;How to create an account for a CMS user from UCL ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[OSErrata| &amp;lt;strike&amp;gt;Deploying OS errata&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new dCache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes:&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
*[[LToS| Support of Long-tail of Science]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor &amp;lt;strike&amp;gt;BEgrid wiki&amp;lt;/strike&amp;gt;(OBSOLETE)]&lt;br /&gt;
*[[Test_things| &amp;lt;strike&amp;gt;Test things&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[Lemon_installation| &amp;lt;strike&amp;gt;Lemon installation&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[QuattorPointers| &amp;lt;strike&amp;gt;Pointers&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; to more in-depth information on quattor&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AddingMachineToCluster| &amp;lt;strike&amp;gt;Adding&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; a new machine to the cluster&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on a SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface]  |  [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Network ====&lt;br /&gt;
* [[network_bond_and_tag|Bonding of 2 interfaces + tagging of 2 vlans on the bond (PRIV+PUB)]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=679</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=679"/>
		<updated>2016-05-18T14:45:22Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly put the cluster on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL| &amp;lt;strike&amp;gt;APEL&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| &amp;lt;strike&amp;gt;SL4 x86_64 WNs&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| &amp;lt;strike&amp;gt;Upgrade CA certificates&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin ?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controler on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| &amp;lt;strike&amp;gt;LDAP authentication system for the replication between UCL and IIHE sites&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| &amp;lt;strike&amp;gt;How to implement local PBS submission with CRAB ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| &amp;lt;strike&amp;gt;How to create an account for a CMS user from UCL ?&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[OSErrata| &amp;lt;strike&amp;gt;Deploying OS errata&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new dCache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving to RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor &amp;lt;strike&amp;gt;BEgrid wiki&amp;lt;/strike&amp;gt;(OBSOLETE)]&lt;br /&gt;
*[[Test_things| &amp;lt;strike&amp;gt;Test things&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[Lemon_installation| &amp;lt;strike&amp;gt;Lemon installation&amp;lt;/strike&amp;gt;(OBSOLETE)]]&lt;br /&gt;
*[[QuattorPointers| &amp;lt;strike&amp;gt;Pointers&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; to more in-depth information on quattor&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AddingMachineToCluster| &amp;lt;strike&amp;gt;Adding&amp;lt;/strike&amp;gt;]]&amp;lt;strike&amp;gt; a new machine to the cluster&amp;lt;/strike&amp;gt;(OBSOLETE)&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on a SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
*[[LToS| Support of Long-tail of Science]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface]  |  [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Network ====&lt;br /&gt;
* [[network_bond_and_tag|Bonding of 2 interfaces + tagging of 2 vlans on the bond (PRIV+PUB)]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=ApelGapPublishing&amp;diff=659</id>
		<title>ApelGapPublishing</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=ApelGapPublishing&amp;diff=659"/>
		<updated>2016-04-06T10:31:52Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* How to activate the gap publisher ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Goal ===&lt;br /&gt;
On this page, we explain how to publish an accounting gap with the EMI3 APEL client.&lt;br /&gt;
=== Where to find the results of the apel sync tests ? ===&lt;br /&gt;
These results are available [http://goc-accounting.grid-support.ac.uk/rss/BEgrid-ULB-VUB_Sync.html here].&lt;br /&gt;
=== How to activate the gap publisher ? ===&lt;br /&gt;
Log in on apel and modify the config file /etc/apel/client.cfg following the explanations given in the commented lines :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Which records to send:&lt;br /&gt;
# latest - just send the new records to the server&lt;br /&gt;
# gap    - send records from between the specified dates (inclusive)&lt;br /&gt;
#          this is only for individual job records&lt;br /&gt;
# all    - send all records to the server.  Don&#039;t do this for individual&lt;br /&gt;
#          job records without talking to the apel team! &lt;br /&gt;
interval = latest&lt;br /&gt;
## only used if interval = gap&lt;br /&gt;
#gap_start = 2012-01-01&lt;br /&gt;
#gap_end = 2012-01-31&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
So, let&#039;s say that there is a gap in April 2014. We must then change the config so that it contains :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interval = gap&lt;br /&gt;
gap_start = 2014-04-01&lt;br /&gt;
gap_end = 2014-04-30&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then, you can just wait for the apelclient to be started by cron, or you can run it manually with the following command :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(date --iso-8601=seconds --utc; /usr/bin/apelclient) &amp;gt;&amp;gt; /var/log/apelclient.log 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
When it&#039;s done, don&#039;t forget to set interval back to &#039;latest&#039; and to comment out gap_start and gap_end.&lt;br /&gt;
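The edit-run-restore cycle above can be sketched as follows; this is a hypothetical illustration run against a temporary copy of the config (the real file is /etc/apel/client.cfg), not the production procedure itself :&lt;br /&gt;

```shell
# Sketch of the gap-publishing toggle, on a temp stand-in for /etc/apel/client.cfg
cfg=$(mktemp)
printf '%s\n' 'interval = latest' '#gap_start = 2012-01-01' '#gap_end = 2012-01-31' > "$cfg"

# Switch to gap mode for April 2014 (the example dates from the text)
sed -i -e 's/^interval = latest/interval = gap/' \
       -e 's/^#gap_start = .*/gap_start = 2014-04-01/' \
       -e 's/^#gap_end = .*/gap_end = 2014-04-30/' "$cfg"
grep '^interval' "$cfg"

# When the run is done, restore 'latest' and re-comment the dates
sed -i -e 's/^interval = gap/interval = latest/' \
       -e 's/^gap_start/#gap_start/' \
       -e 's/^gap_end/#gap_end/' "$cfg"
grep '^interval' "$cfg"
```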
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=656</id>
		<title>WorkingWithRHEL7</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=656"/>
		<updated>2016-03-24T20:26:12Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Getting rid of firewalld and coming back to iptables ==&lt;br /&gt;
Here are the magic commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop firewalld&lt;br /&gt;
systemctl disable firewalld&lt;br /&gt;
yum remove firewalld&lt;br /&gt;
yum install iptables-services&lt;br /&gt;
systemctl enable iptables.service&lt;br /&gt;
systemctl enable ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don&#039;t forget to configure SSH with system-config-firewall-tui. And after that :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start iptables.service&lt;br /&gt;
systemctl start ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
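If you prefer to write the rules by hand instead of using system-config-firewall-tui, a minimal /etc/sysconfig/iptables keeping SSH (port 22) reachable could look like the sketch below; the rule set is an assumption to adapt to your site, and it is written to a temp file here rather than the real path :&lt;br /&gt;

```shell
# Hypothetical minimal iptables rules file keeping SSH open (temp file stand-in)
f=$(mktemp)
{
  echo '*filter'
  echo ':INPUT ACCEPT [0:0]'
  echo ':FORWARD ACCEPT [0:0]'
  echo ':OUTPUT ACCEPT [0:0]'
  echo '-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT'
  echo '-A INPUT -i lo -j ACCEPT'
  echo '-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT'
  echo '-A INPUT -j REJECT --reject-with icmp-host-prohibited'
  echo 'COMMIT'
} > "$f"
grep -c '^-A INPUT' "$f"
```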
&lt;br /&gt;
== Replacing NetworkManager by network ==&lt;br /&gt;
First, check that the &#039;&#039;&#039;GATEWAY&#039;&#039;&#039; parameter is present in the ifcfg-ethX file of the NIC. (GATEWAY0 does not work without NetworkManager.)&lt;br /&gt;
Then, type these commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop NetworkManager&lt;br /&gt;
systemctl disable NetworkManager&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now, we can restart network :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
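The GATEWAY check can be scripted over all interface files; the sketch below uses a temp directory standing in for /etc/sysconfig/network-scripts and example addresses :&lt;br /&gt;

```shell
# Sketch: report which ifcfg-* files carry a GATEWAY line before disabling NM
# (temp dir and addresses are illustrative)
d=$(mktemp -d)
printf 'DEVICE=eth0\nBOOTPROTO=static\nGATEWAY=192.168.0.1\n' > "$d/ifcfg-eth0"
printf 'DEVICE=eth1\nBOOTPROTO=static\n' > "$d/ifcfg-eth1"
for f in "$d"/ifcfg-*; do
  if grep -q '^GATEWAY=' "$f"; then
    echo "$(basename "$f"): GATEWAY present"
  else
    echo "$(basename "$f"): GATEWAY missing"
  fi
done
```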
&lt;br /&gt;
== Adoption of Predictable Network Interface Names ==&lt;br /&gt;
Names of NICs will change. To prepare yourself for the new naming rules, please read these documents :&lt;br /&gt;
*http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/&lt;br /&gt;
*http://blog.laimbock.com/2014/11/22/systemd-understanding-predictable-network-interface-names/&lt;br /&gt;
&lt;br /&gt;
== Default filesystem : xfs ==&lt;br /&gt;
RHEL7 adopts XFS as the default filesystem. It might generate trouble with our Quattor scdb -&amp;gt; check our filesystem layout templates.&lt;br /&gt;
&lt;br /&gt;
== Transition from SysVinit to Systemd ==&lt;br /&gt;
I find this [https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet link] useful.&lt;br /&gt;
&lt;br /&gt;
Here are some useful commands :&lt;br /&gt;
&lt;br /&gt;
=== List types of unit ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl -t help&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== List units of a certain type ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=[unit_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== List of services ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
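The tabular output of list-units is easy to post-process with awk, for instance to pick out failed services; the sample output below is illustrative, not captured from a real host :&lt;br /&gt;

```shell
# Sketch: filter failed services out of (sample) list-units output
sample='sshd.service   loaded active running OpenSSH server daemon
crond.service  loaded failed failed  Command Scheduler
rsyslog.service loaded active running System Logging Service'
printf '%s\n' "$sample" | awk '$3 == "failed" { print $1 }'
```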
=== Check status of services ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl status [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want more concise info :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl is-enabled [service_name]&lt;br /&gt;
systemctl is-active [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Stop/start/restart service ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start [service_name]&lt;br /&gt;
systemctl stop [service_name]&lt;br /&gt;
systemctl restart [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Enable/disable at boot time ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl enable [service_name]&lt;br /&gt;
systemctl disable [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=655</id>
		<title>WorkingWithRHEL7</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=655"/>
		<updated>2016-03-24T20:24:08Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Transition from SysVinit to Systemd */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Getting rid of firewalld and coming back to iptables ===&lt;br /&gt;
Here are the magic commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop firewalld&lt;br /&gt;
systemctl disable firewalld&lt;br /&gt;
yum remove firewalld&lt;br /&gt;
yum install iptables-services&lt;br /&gt;
systemctl enable iptables.service&lt;br /&gt;
systemctl enable ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don&#039;t forget to configure SSH with system-config-firewall-tui. And after that :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start iptables.service&lt;br /&gt;
systemctl start ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Replacing NetworkManager by network ===&lt;br /&gt;
First, check that the &#039;&#039;&#039;GATEWAY&#039;&#039;&#039; parameter is present in the ifcfg-ethX file of the NIC. (GATEWAY0 does not work without NetworkManager.)&lt;br /&gt;
Then, type these commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop NetworkManager&lt;br /&gt;
systemctl disable NetworkManager&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now, we can restart network :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Adoption of Predictable Network Interface Names ===&lt;br /&gt;
Names of NICs will change. To prepare yourself for the new naming rules, please read these documents :&lt;br /&gt;
*http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/&lt;br /&gt;
*http://blog.laimbock.com/2014/11/22/systemd-understanding-predictable-network-interface-names/&lt;br /&gt;
&lt;br /&gt;
=== Default filesystem : xfs ===&lt;br /&gt;
RHEL7 adopts XFS as the default filesystem. It might generate trouble with our Quattor scdb -&amp;gt; check our filesystem layout templates.&lt;br /&gt;
&lt;br /&gt;
=== Transition from SysVinit to Systemd ===&lt;br /&gt;
I find this [https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet link] useful.&lt;br /&gt;
&lt;br /&gt;
Here are some useful commands :&lt;br /&gt;
&lt;br /&gt;
==== List types of unit ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl -t help&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== List units of a certain type ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=[unit_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== List of services ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Check status of services ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl status [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want more concise info :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl is-enabled [service_name]&lt;br /&gt;
systemctl is-active [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Stop/start/restart service ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start [service_name]&lt;br /&gt;
systemctl stop [service_name]&lt;br /&gt;
systemctl restart [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Enable/disable at boot time ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl enable [service_name]&lt;br /&gt;
systemctl disable [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=654</id>
		<title>WorkingWithRHEL7</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=654"/>
		<updated>2016-03-24T20:20:38Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Transition from SysVinit to Systemd */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Getting rid of firewalld and coming back to iptables ===&lt;br /&gt;
Here are the magic commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop firewalld&lt;br /&gt;
systemctl disable firewalld&lt;br /&gt;
yum remove firewalld&lt;br /&gt;
yum install iptables-services&lt;br /&gt;
systemctl enable iptables.service&lt;br /&gt;
systemctl enable ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don&#039;t forget to configure SSH with system-config-firewall-tui. And after that :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start iptables.service&lt;br /&gt;
systemctl start ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Replacing NetworkManager by network ===&lt;br /&gt;
First, check that the &#039;&#039;&#039;GATEWAY&#039;&#039;&#039; parameter is present in the ifcfg-ethX file of the NIC. (GATEWAY0 does not work without NetworkManager.)&lt;br /&gt;
Then, type these commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop NetworkManager&lt;br /&gt;
systemctl disable NetworkManager&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now, we can restart network :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Adoption of Predictable Network Interface Names ===&lt;br /&gt;
Names of NICs will change. To prepare yourself for the new naming rules, please read these documents :&lt;br /&gt;
*http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/&lt;br /&gt;
*http://blog.laimbock.com/2014/11/22/systemd-understanding-predictable-network-interface-names/&lt;br /&gt;
&lt;br /&gt;
=== Default filesystem : xfs ===&lt;br /&gt;
RHEL7 adopts XFS as the default filesystem. It might generate trouble with our Quattor scdb -&amp;gt; check our filesystem layout templates.&lt;br /&gt;
&lt;br /&gt;
=== Transition from SysVinit to Systemd ===&lt;br /&gt;
I find this [https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet link] useful.&lt;br /&gt;
&lt;br /&gt;
Here are some useful commands :&lt;br /&gt;
&lt;br /&gt;
==== List types of unit ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl -t help&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== List units of a certain type ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=[unit_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== List of services ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl list-units --type=service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Check status of services ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl status [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want more concise info :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl is-enabled [service_name]&lt;br /&gt;
systemctl is-active [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== Stop/start/restart service ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start [service_name]&lt;br /&gt;
systemctl stop [service_name]&lt;br /&gt;
systemctl restart [service_name]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=653</id>
		<title>WorkingWithRHEL7</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=WorkingWithRHEL7&amp;diff=653"/>
		<updated>2016-03-24T20:06:55Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Transition from SysVinit to Systemd */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Getting rid of firewalld and coming back to iptables ===&lt;br /&gt;
Here are the magic commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop firewalld&lt;br /&gt;
systemctl disable firewalld&lt;br /&gt;
yum remove firewalld&lt;br /&gt;
yum install iptables-services&lt;br /&gt;
systemctl enable iptables.service&lt;br /&gt;
systemctl enable ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don&#039;t forget to configure SSH with system-config-firewall-tui. And after that :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start iptables.service&lt;br /&gt;
systemctl start ip6tables.service&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Replacing NetworkManager by network ===&lt;br /&gt;
First, check that the &#039;&#039;&#039;GATEWAY&#039;&#039;&#039; parameter is present in the ifcfg-ethX file of the NIC. (GATEWAY0 does not work without NetworkManager.)&lt;br /&gt;
Then, type these commands :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl stop NetworkManager&lt;br /&gt;
systemctl disable NetworkManager&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now, we can restart network :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Adoption of Predictable Network Interface Names ===&lt;br /&gt;
Names of NICs will change. To prepare yourself for the new naming rules, please read these documents :&lt;br /&gt;
*http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/&lt;br /&gt;
*http://blog.laimbock.com/2014/11/22/systemd-understanding-predictable-network-interface-names/&lt;br /&gt;
&lt;br /&gt;
=== Default filesystem : xfs ===&lt;br /&gt;
RHEL7 adopts XFS as the default filesystem. It might generate trouble with our Quattor scdb -&amp;gt; check our filesystem layout templates.&lt;br /&gt;
&lt;br /&gt;
=== Transition from SysVinit to Systemd ===&lt;br /&gt;
I find this [https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet link] useful.&lt;br /&gt;
&lt;br /&gt;
Here are some useful commands :&lt;br /&gt;
&lt;br /&gt;
* List types of unit&lt;br /&gt;
* Check status of services&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=MigrateToMediaWiki&amp;diff=652</id>
		<title>MigrateToMediaWiki</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=MigrateToMediaWiki&amp;diff=652"/>
		<updated>2016-03-18T16:19:37Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Create a new empty MediaWiki */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== The plan of the migration ===&lt;br /&gt;
Here are the steps :&lt;br /&gt;
*creation of a new Apache webserver called &amp;quot;mona&amp;quot; with Quattor&lt;br /&gt;
*moving all the Trac wikis to mona&lt;br /&gt;
*conversion of the Trac wikis to MediaWiki (thanks to a homebrewed script)&lt;br /&gt;
*once everything is ready, switch to the new webserver in the DNS servers&lt;br /&gt;
&lt;br /&gt;
=== Creation of the new webserver ===&lt;br /&gt;
It is a virtual machine with 60GB of disk size (see the hardware template hardware/machine/Virtual/virtual_kvm_mon for more details). The software configuration is done through the template config/t2b_docu_server.&lt;br /&gt;
&lt;br /&gt;
=== Moving the Trac wikis to the new webserver ===&lt;br /&gt;
For each wiki that was on the old mon server, create a new wiki on the new webserver with the trac-admin command. For example :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
trac-admin /var/www/trac/t2b initenv&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On the old webserver, make a copy of each wiki to an nfs-shared directory. Continuing with the previous example, it gives something like this :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
trac-admin /var/www/trac/t2b hotcopy /userbackup/sgerard/trac_backup_20-08-2015/t2b&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now, going back to the new webserver, we can simply overwrite the content of /var/www/trac/t2b with the content of the back-up copy :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp -a -p /userbackup/sgerard/trac_backup_20-08-2015/t2b /var/www/trac/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(This works because the NFS share where the backup copy was made on the old server has been mounted on the new webserver thanks to Quattor.)&lt;br /&gt;
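The hotcopy-and-restore round trip above can be sketched with throwaway directories (the real paths are the ones shown in the commands; the names below are stand-ins so the sketch is self-contained and needs no NFS mount or trac-admin):

```shell
# BACKUP stands in for the NFS-shared hotcopy, TRACROOT for
# /var/www/trac on the new webserver (both are assumptions here).
BACKUP=$(mktemp -d)/t2b
TRACROOT=$(mktemp -d)
mkdir -p "$BACKUP/conf" "$TRACROOT/t2b"
printf 'restored\n' > "$BACKUP/conf/trac.ini"
# -a preserves permissions, ownership and timestamps, which matters
# because the wiki files must stay readable by the web server user.
# Copying onto the pre-existing t2b directory merges the contents in.
cp -a "$BACKUP" "$TRACROOT/"
```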
&lt;br /&gt;
=== Conversion from Trac to MediaWiki ===&lt;br /&gt;
For each Trac wiki that you would like to migrate, you have to create a new empty MediaWiki. I will explain all the details below.&lt;br /&gt;
&lt;br /&gt;
==== Create a new empty MediaWiki ====&lt;br /&gt;
Here are the steps to create a wiki &amp;quot;test&amp;quot; :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /var/www/mediawiki119/test&lt;br /&gt;
chown apache:apache /var/www/mediawiki119/test&lt;br /&gt;
cd /var/www/mediawiki119/test&lt;br /&gt;
ln -s /usr/share/mediawiki119/* .&lt;br /&gt;
rm LocalSettings.php AdminSettings.php images mw-config&lt;br /&gt;
cp -a /usr/share/mediawiki119/mw-config .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
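The same steps can be dry-run in temporary directories (SHARE stands in for the shared MediaWiki tree and WIKI for the new wiki directory; the chown to apache:apache is skipped since it needs root, and only a couple of representative files are modelled):

```shell
SHARE=$(mktemp -d)
WIKI=$(mktemp -d)/test
mkdir -p "$WIKI" "$SHARE/mw-config"
touch "$SHARE/index.php" "$SHARE/LocalSettings.php"
cd "$WIKI"
# Symlink the shared tree into the wiki directory...
ln -s "$SHARE"/* .
# ...then drop the symlinks that must be wiki-local, and take a
# real copy of mw-config because the install wizard writes into it.
rm LocalSettings.php mw-config
cp -a "$SHARE/mw-config" .
```

After this, index.php is still a shared symlink while mw-config is a private, writable copy.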
&lt;br /&gt;
Now, open the URL of the wiki in your browser to run the initialization wizard. After you have filled in all the forms, you arrive on a page saying &amp;quot;Installation successful !&amp;quot; and telling you that you can download LocalSettings.php and copy it to the directory of your wiki. After it is copied, don&#039;t forget to adapt the permissions :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown apache:apache LocalSettings.php&lt;br /&gt;
chmod 640 LocalSettings.php&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, you can remove the mw-config directory, because it is only needed by the wizard :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm -rf mw-config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable image uploading, you need to do this in the directory of the wiki :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir images&lt;br /&gt;
chmod 755 images&lt;br /&gt;
chown apache:apache images&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And then, you need to add a few lines at the end of the LocalSettings.php config file :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$wgUploadDirectory = &amp;quot;images&amp;quot;;&lt;br /&gt;
$wgUploadPath = &amp;quot;$wgScriptPath/images&amp;quot;;&lt;br /&gt;
$wgEnableUploads = true;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And that&#039;s it !&lt;br /&gt;
&lt;br /&gt;
==== Conversion ====&lt;br /&gt;
Now, for each wiki, the next step is to export all the pages in txt format, convert them to MediaWiki format, and then import the converted pages into the corresponding MediaWiki. To avoid hours of tedious manual work, a script has been created; it is available in our SVN &amp;quot;scripting&amp;quot; repository : http://mon.iihe.ac.be/repos/scripting/mon/convert_trac_to_mediawiki/. Here is an example showing how to use it :&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The following commands check out the conversion script and run it :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh mona&lt;br /&gt;
svn co http://mon.iihe.ac.be/repos/scripting/mon/convert_trac_to_mediawiki&lt;br /&gt;
cd convert_trac_to_mediawiki&lt;br /&gt;
./export_all_pages.pl /var/www/trac/t2b /var/www/mediawiki119/t2b&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now you have to initialize the wiki. Go to the URL you have created :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://mona.iihe.ac.be/wiki/test&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and follow the instructions.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
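To give a flavour of the kind of rewriting the conversion script performs, here is a toy two-rule filter for Trac-style links and bullets. This is only an illustration under my own assumptions about the simplest Trac constructs, not the actual export_all_pages.pl, which handles far more syntax:

```shell
# Rewrite Trac [wiki:Page] links into MediaWiki [[Page]] links,
# and Trac " * " bullets into MediaWiki "* " bullets.
trac_to_mediawiki() {
  sed -e 's/\[wiki:\([A-Za-z0-9_]*\)\]/[[\1]]/g' \
      -e 's/^ \* /* /'
}
printf ' * see [wiki:BackupT2BCloud]\n' | trac_to_mediawiki
# prints: * see [[BackupT2BCloud]]
```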
&lt;br /&gt;
=== Access restriction to wiki pages ===&lt;br /&gt;
Anybody can read the pages, but to modify the content, you need to log in. Login is done through SSL, meaning that your certificate will be requested, and if its DN contains &amp;quot;O=BEGRID&amp;quot; or &amp;quot;DC=cern&amp;quot;, it is accepted and an account is automatically created. This account remains associated with your DN.&lt;br /&gt;
&lt;br /&gt;
Now, the technical details : we use &amp;quot;Extension:SSL authentication&amp;quot;, which is documented [https://www.mediawiki.org/wiki/Extension:SSL_authentication here]. But to force the switch to HTTPS when clicking on &amp;quot;Log in&amp;quot;, we had to add this to the httpd.conf :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
RewriteCond %{REQUEST_URI} ^/wiki/t2b/index.php$&lt;br /&gt;
RewriteCond %{QUERY_STRING} ^title=Special:UserLogin&lt;br /&gt;
RewriteCond %{REQUEST_METHOD} ^GET$&lt;br /&gt;
RewriteRule ^(.*)$ https://%{SERVER_NAME}/wiki/t2b/index.php/Main_Page [R]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sources ===&lt;br /&gt;
*http://mechanicalkeys.com/wiki/index.php?title=Install_MediaWiki_on_CentOS_6.4&lt;br /&gt;
*http://sharkysoft.com/wiki/how_to_configure_multiple_MediaWiki_instances_on_a_single_host&lt;br /&gt;
*https://www.mediawiki.org/wiki/Extension:TracWiki2MediaWiki&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=Main_Page&amp;diff=625</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=Main_Page&amp;diff=625"/>
		<updated>2016-02-04T10:07:26Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Information for new users */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to the CMS Belgian T2 Wiki ==&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;span style=&amp;quot;font-size: 300%;&amp;quot;&amp;gt; [[first_access_to_t2b|=&amp;gt; FIRST ACCESS TO T2B &amp;lt;=]] &amp;lt;/span&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== General information for users ===&lt;br /&gt;
*[[Faq_t2b | FAQ]]&lt;br /&gt;
==== Information for new users ====&lt;br /&gt;
*[[T2bSupport| T2B Support guidelines]]&lt;br /&gt;
*[[First_access_to_t2b|Getting access to T2B]]&lt;br /&gt;
*[[Getting_a_certificate_for_the_T2|Certificates and VOs]]&lt;br /&gt;
&lt;br /&gt;
==== Using the Tier2 computing resources ====&lt;br /&gt;
*[[policies| Policies concerning the usage of computing resources.]]&lt;br /&gt;
*[[CurrentStatus| Current status of the Tier2]]&lt;br /&gt;
*[[Getting_started_with_the_CMSSW_software| Getting started with the CMSSW software]]&lt;br /&gt;
*[[Using_Git| Using Git]]&lt;br /&gt;
*[[Getting_started_with_the_MadGraph_software| Getting started with the MadGraph software]]&lt;br /&gt;
*Submitting jobs with CRAB&lt;br /&gt;
**[[gridSubmission_withCrab| To the worldwide grid]]&lt;br /&gt;
*Submitting jobs without CRAB&lt;br /&gt;
**[[localSubmission| To the local resources]]&lt;br /&gt;
**[[gridSubmission| To the worldwide grid]]&lt;br /&gt;
*[[GridStorageAccess| How to handle data on Grid storage]]&lt;br /&gt;
*[[FAQ_CMSSW_on_the_Grid| FAQ CMSSW on the Grid on proxy and more!]]&lt;br /&gt;
*[[OtherSoftware| Other software available at the T2]]&lt;br /&gt;
&lt;br /&gt;
==== Back-up procedures ====&lt;br /&gt;
*[[BackupDocsLinuxLaptop| Procedure to automate backups of personal documents (Linux laptops only)]]&lt;br /&gt;
*[[Backup| Backup scheme for the user disks]]&lt;br /&gt;
*[[AccidentalDeleteOfFiles| What to do if I have accidentally deleted some files on my personal computer ?]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Other topics ====&lt;br /&gt;
*[[User_Meetings]] list of user meetings with slides attached&lt;br /&gt;
*[[Basic_computing_skills| Basic computing skills]]&lt;br /&gt;
*[[CernLxplus| Useful info on use of lxplus.cern.ch]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Dedicated twiki pages maintained by several research groups ===&lt;br /&gt;
*[[TtBar_Analysis_Framework| TtBar Analysis Framework (old)]]&lt;br /&gt;
*[[TopQuarkGroup| Top Quark Group wiki]]&lt;br /&gt;
*[[HEEP_Analysis_Framework| HEEP Analysis Framework]]&lt;br /&gt;
*[[V0_Analysis_wiki| V0 Analysis wiki]]&lt;br /&gt;
*[[Info_exchange| Higgs analysis]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Service work for CMS ===&lt;br /&gt;
*[[DDT]]&lt;br /&gt;
*[[CMSSWDeployment]]&lt;br /&gt;
*[[ProdAgentAllUsers| Prodagent for users]]&lt;br /&gt;
*[[TestStoreTemp| Writing tests in /store/temp/user on T2 SE&#039;s]]&lt;br /&gt;
*[[CrabServerInstall| CRAB Server installation]]&lt;br /&gt;
*[[CrabValidation| Basic validation of CRAB releases]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=== Obsolete twiki pages ===&lt;br /&gt;
*[[DIY-UI]]&lt;br /&gt;
*[[CrabIIHETransitionSLC4ToSLC5| Using CRAB at IIHE during the transition from SLC4 to SLC5]]&lt;br /&gt;
*[[DataSamplesRequests2007| Samples Requests for 2007]]&lt;br /&gt;
*[[DataSamplesRequests2008| Samples Requests for 2008]]&lt;br /&gt;
*[[CrabNewIIHE| Crab at IIHE]]&lt;br /&gt;
*[[OldMainPage| The old main page is kept here for reference]]&lt;br /&gt;
&lt;br /&gt;
== Admin section ==&lt;br /&gt;
*[[AdminPage| Pages for administrators]]&lt;br /&gt;
&lt;br /&gt;
== Getting started with MediaWiki ==&lt;br /&gt;
*[//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide]&lt;br /&gt;
*[//www.mediawiki.org/wiki/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
*[//www.mediawiki.org/wiki/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
*[https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=620</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=620"/>
		<updated>2016-02-03T12:21:48Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Grid Configuration Issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly put the cluster on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| SL4 x86_64 WNs]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| &amp;lt;strike&amp;gt;Upgrade CA certificates&amp;lt;/strike&amp;gt; (OBSOLETE)]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin ?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controller on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| LDAP authentication system for the replication between UCL and IIHE sites]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| How to implement local PBS submission with CRAB ?]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| How to create an account for a CMS user from UCL ?]]&lt;br /&gt;
*[[OSErrata| Deploying OS errata]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new Dcache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor BEgrid wiki]&lt;br /&gt;
*[[Test_things| Test things]]&lt;br /&gt;
*[[Lemon_installation| Lemon installation]]&lt;br /&gt;
*[[QuattorPointers| Pointers]] to more in-depth information on quattor&lt;br /&gt;
*[[AddingMachineToCluster| Adding]] a new machine to the cluster&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on a SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface]  |  [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=619</id>
		<title>UpdateCACertificates</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=619"/>
		<updated>2016-02-03T11:08:13Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Procedure to update the IGTF CA certificates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Procedure to update the IGTF CA certificates ===&lt;br /&gt;
&lt;br /&gt;
*First, it&#039;s important to check that the new RPMs are actually available here : http://repository.egi.eu/sw/production/cas/1/current/RPMS/&lt;br /&gt;
*Login to yum&lt;br /&gt;
*Go to dir /var/www/html&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the first option in the menu to sync the local repo with the remote one&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the second option in the menu to create a new snapshot&lt;br /&gt;
*Take note of the timestamp of the snapshot&lt;br /&gt;
*In the Quattor SVN, open the template site/global_variables and in the nlist REPO_YUM_SNAPSHOT_DATE, change the timestamp for the key &#039;ca&#039;&lt;br /&gt;
*You can easily guess the rest : local build, commit, runcheck...&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=618</id>
		<title>UpdateCACertificates</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=618"/>
		<updated>2016-02-03T11:06:29Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Procedure to update the IGTF CA certificates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Procedure to update the IGTF CA certificates ===&lt;br /&gt;
&lt;br /&gt;
*Login to yum&lt;br /&gt;
*Go to dir /var/www/html&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the first option in the menu to sync the local repo with the remote one&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the second option in the menu to create a new snapshot&lt;br /&gt;
*Take note of the timestamp of the snapshot&lt;br /&gt;
*In the Quattor SVN, open the template site/global_variables and in the nlist REPO_YUM_SNAPSHOT_DATE, change the timestamp for the key &#039;ca&#039;&lt;br /&gt;
*You can easily guess the rest : local build, commit, runcheck...&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=617</id>
		<title>UpdateCACertificates</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=UpdateCACertificates&amp;diff=617"/>
		<updated>2016-02-03T11:02:22Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: Created page with &amp;quot;=== Procedure to update the IGTF CA certificates ===  *Login to yum *Go to dir /var/www/html *Launch ./manage_repositories.pl and choose first option in the menu to sync the l...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Procedure to update the IGTF CA certificates ===&lt;br /&gt;
&lt;br /&gt;
*Login to yum&lt;br /&gt;
*Go to dir /var/www/html&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the first option in the menu to sync the local repo with the remote one&lt;br /&gt;
*Launch ./manage_repositories.pl and choose the second option in the menu to create a new snapshot&lt;br /&gt;
*Take note of the timestamp of the snapshot&lt;br /&gt;
*In the Quattor SVN, open the template site/global_variables and in the nlist REPO_YUM_SNAPSHOT_DATE, change the timestamp for the key &#039;ca&#039;&lt;br /&gt;
*Build, commit, runcheck...&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=616</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=616"/>
		<updated>2016-02-03T10:57:01Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Grid Configuration Issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly put the cluster on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| SL4 x86_64 WNs]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| Upgrade CA certificates]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
*[[UpdateCACertificates| Update IGTF CA certificates]]&lt;br /&gt;
&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin ?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controller on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| LDAP authentication system for the replication between UCL and IIHE sites]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| How to implement local PBS submission with CRAB ?]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| How to create an account for a CMS user from UCL ?]]&lt;br /&gt;
*[[OSErrata| Deploying OS errata]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new Dcache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithGit| Central management of all the admin scripts with Git]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
*[[motd|Message Of The Day (motd)]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor BEgrid wiki]&lt;br /&gt;
*[[Test_things| Test things]]&lt;br /&gt;
*[[Lemon_installation| Lemon installation]]&lt;br /&gt;
*[[QuattorPointers| Pointers]] to more in-depth information on quattor&lt;br /&gt;
*[[AddingMachineToCluster| Adding]] a new machine to the cluster&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on a SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
==== Logstash / Elasticsearch / Kibana (ELK) ====&lt;br /&gt;
machine: log10 | [http://log10.iihe.ac.be/index.html interface]  |  [http://log10.iihe.ac.be/HQ index manager]&lt;br /&gt;
* [[log_forwarding_with_quattor|Forwarding a log with rsyslog to logstash using quattor]]&lt;br /&gt;
* [[log_parsing_with_logstash|Parsing the logs with logstash]]&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=PortalOperations&amp;diff=561</id>
		<title>PortalOperations</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=PortalOperations&amp;diff=561"/>
		<updated>2015-11-19T21:47:55Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* How to stop/start the portal ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Portal operations}}&lt;br /&gt;
== How to stop/start the portal ? ==&lt;br /&gt;
Log in on guse as root and source the ~/.guserc file :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source ~/.guserc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To stop the portal :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/local/guse/apache-tomcat-6.0.39/bin&lt;br /&gt;
./shutdown.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To start the portal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./startup.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After restarting the portal, you might need to start the Service Wizard by opening the following URL:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;URL_install_backend&amp;gt;:8080/information&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In the Database Configuration popup menu, the MySQL server must be redefined. Do not forget to replace the string IP with the IP address of the MySQL server.&lt;br /&gt;
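The stop/start steps above can be gathered into one small script. This is only a sketch: the paths are the ones documented above, and the DRYRUN guard is an addition of this sketch (not part of the portal setup) so the steps can be previewed on any machine before running them for real on guse.

```shell
# Sketch of the portal restart sequence documented above.
# With DRYRUN left at its default ("echo"), the steps are only printed;
# set DRYRUN= (empty) on guse, as root, to actually execute them.
DRYRUN="${DRYRUN-echo}"
$DRYRUN source ~/.guserc
$DRYRUN cd /usr/local/guse/apache-tomcat-6.0.39/bin
$DRYRUN ./shutdown.sh
$DRYRUN ./startup.sh
```

Sourcing ~/.guserc and the cd only take effect inside the script's own shell, which is fine here since shutdown.sh and startup.sh are invoked from that same shell.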
&lt;br /&gt;
== Log files ==&lt;br /&gt;
Tomcat logfiles are here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/guse/apache-tomcat-6.0.39/logs/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Liferay logfiles:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/guse/logs/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=559</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=559"/>
		<updated>2015-11-17T16:29:48Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Before deploying Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The component ncm-ceph is based on ceph-deploy. It is documented [http://quattor-core.readthedocs.org/en/latest/components/ceph/ here].&lt;br /&gt;
&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and so it is the machine on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines so that a quorum can be maintained if one MON machine fails). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks (one OSD daemon per disk). When doing tests, you can put the MON daemons on OSD machines, but on a large production cluster, it&#039;s better to put the MON daemons on dedicated machines.&lt;br /&gt;
&lt;br /&gt;
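The advice above to keep an odd number of MON machines comes down to simple quorum arithmetic: a strict majority of floor(n/2)+1 monitors must be reachable, so a fourth MON does not let the cluster survive any more failures than three do. A quick shell check (an illustration added here, not part of the wiki):

```shell
# Quorum arithmetic behind the "odd number of MONs" rule:
# a strict majority (n/2 + 1, integer division) of monitors must be up.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "mons=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

Both 3 and 4 MONs tolerate only one failure, which is why the even count buys nothing.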
The layout and the parameters of the cluster are described in the template site/ceph.&lt;br /&gt;
&lt;br /&gt;
== Before deploying Ceph ==&lt;br /&gt;
The deployment of the Ceph cluster is triggered by this line in the profile of the DEPLOY machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
include &#039;site/ceph&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
But before the deployment, your OSD machines must be prepared: the OSD disks must be partitioned and formatted correctly. In an OSD machine, at least one partition or one drive must be prepared to store the objects. To achieve this, you need to define an appropriate disk layout. You will find an example of such a disk layout in the template site/filesystems/ceph_fs_test. Below are the interesting parts of this template:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
variable CEPH_FS = &#039;btrfs&#039;;&lt;br /&gt;
&lt;br /&gt;
variable DISK_VOLUME_PARAMS = {&lt;br /&gt;
	t = dict();&lt;br /&gt;
...&lt;br /&gt;
	t[&#039;logvda&#039;] = dict(&lt;br /&gt;
		&#039;size&#039;,			-1,&lt;br /&gt;
		&#039;mountpoint&#039;,	&#039;/var/lib/ceph/log/vda4&#039;,&lt;br /&gt;
		&#039;fstype&#039;,		CEPH_FS,&lt;br /&gt;
		&#039;type&#039;,			&#039;partition&#039;,&lt;br /&gt;
		&#039;device&#039;,		&#039;vda4&#039;,&lt;br /&gt;
		&#039;preserve&#039;,		true,&lt;br /&gt;
		&#039;mkfsopts&#039;,		CEPH_DISK_OPTIONS[CEPH_FS][&#039;mkfsopts&#039;],&lt;br /&gt;
		&#039;mountopts&#039;,	CEPH_DISK_OPTIONS[CEPH_FS][&#039;mountopts&#039;],&lt;br /&gt;
	);&lt;br /&gt;
	t[&#039;osd&#039;] = dict(&lt;br /&gt;
		&#039;size&#039;,			-1,&lt;br /&gt;
		&#039;mountpoint&#039;,	&#039;/var/lib/ceph/osd/vdb&#039;,&lt;br /&gt;
		&#039;fstype&#039;,		CEPH_FS,&lt;br /&gt;
		&#039;type&#039;,			&#039;partition&#039;,&lt;br /&gt;
		&#039;device&#039;,		&#039;vdb1&#039;,&lt;br /&gt;
		&#039;preserve&#039;,		true,&lt;br /&gt;
		&#039;mkfsopts&#039;,		CEPH_DISK_OPTIONS[CEPH_FS][&#039;mkfsopts&#039;],&lt;br /&gt;
		&#039;mountopts&#039;,	CEPH_DISK_OPTIONS[CEPH_FS][&#039;mountopts&#039;],&lt;br /&gt;
	);&lt;br /&gt;
	t;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this extract, you can see that our OSD machine has two drives (vda and vdb). We chose the btrfs filesystem to store the objects (you should use ext4 or xfs for a production cluster). The last partition of vda (vda4) contains the filesystem journal, and the vdb disk contains a single partition (vdb1) to store the data.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=558</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=558"/>
		<updated>2015-11-17T16:23:00Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Before deploying Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The component ncm-ceph is based on ceph-deploy. It is documented [http://quattor-core.readthedocs.org/en/latest/components/ceph/ here].&lt;br /&gt;
&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and so it is the machine on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines so that a quorum can be maintained if one MON machine fails). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks (one OSD daemon per disk). When doing tests, you can put the MON daemons on OSD machines, but on a large production cluster, it&#039;s better to put the MON daemons on dedicated machines.&lt;br /&gt;
&lt;br /&gt;
The layout and the parameters of the cluster are described in the template site/ceph.&lt;br /&gt;
&lt;br /&gt;
== Before deploying Ceph ==&lt;br /&gt;
The deployment of the Ceph cluster is triggered by this line in the profile of the DEPLOY machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
include &#039;site/ceph&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
But before the deployment, your OSD machines must be prepared: the OSD disks must be partitioned and formatted correctly. In an OSD machine, at least one partition or one drive must be prepared to store the objects. To achieve this, you need to define an appropriate disk layout. You will find an example of such a disk layout in the template site/filesystems/ceph_fs_test. Below are the interesting parts of this template:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
variable CEPH_FS = &#039;btrfs&#039;;&lt;br /&gt;
&lt;br /&gt;
variable DISK_VOLUME_PARAMS = {&lt;br /&gt;
	t = dict();&lt;br /&gt;
...&lt;br /&gt;
	t[&#039;logvda&#039;] = dict(&lt;br /&gt;
		&#039;size&#039;,			-1,&lt;br /&gt;
		&#039;mountpoint&#039;,	&#039;/var/lib/ceph/log/vda4&#039;,&lt;br /&gt;
		&#039;fstype&#039;,		CEPH_FS,&lt;br /&gt;
		&#039;type&#039;,			&#039;partition&#039;,&lt;br /&gt;
		&#039;device&#039;,		&#039;vda4&#039;,&lt;br /&gt;
		&#039;preserve&#039;,		true,&lt;br /&gt;
		&#039;mkfsopts&#039;,		CEPH_DISK_OPTIONS[CEPH_FS][&#039;mkfsopts&#039;],&lt;br /&gt;
		&#039;mountopts&#039;,	CEPH_DISK_OPTIONS[CEPH_FS][&#039;mountopts&#039;],&lt;br /&gt;
	);&lt;br /&gt;
	t[&#039;osd&#039;] = dict(&lt;br /&gt;
		&#039;size&#039;,			-1,&lt;br /&gt;
		&#039;mountpoint&#039;,	&#039;/var/lib/ceph/osd/vdb&#039;,&lt;br /&gt;
		&#039;fstype&#039;,		CEPH_FS,&lt;br /&gt;
		&#039;type&#039;,			&#039;partition&#039;,&lt;br /&gt;
		&#039;device&#039;,		&#039;vdb1&#039;,&lt;br /&gt;
		&#039;preserve&#039;,		true,&lt;br /&gt;
		&#039;mkfsopts&#039;,		CEPH_DISK_OPTIONS[CEPH_FS][&#039;mkfsopts&#039;],&lt;br /&gt;
		&#039;mountopts&#039;,	CEPH_DISK_OPTIONS[CEPH_FS][&#039;mountopts&#039;],&lt;br /&gt;
	);&lt;br /&gt;
	t;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=557</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=557"/>
		<updated>2015-11-17T16:18:41Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Before deploying Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The component ncm-ceph is based on ceph-deploy. It is documented [http://quattor-core.readthedocs.org/en/latest/components/ceph/ here].&lt;br /&gt;
&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and so it is the machine on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines so that a quorum can be maintained if one MON machine fails). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks (one OSD daemon per disk). When doing tests, you can put the MON daemons on OSD machines, but on a large production cluster, it&#039;s better to put the MON daemons on dedicated machines.&lt;br /&gt;
&lt;br /&gt;
The layout and the parameters of the cluster are described in the template site/ceph.&lt;br /&gt;
&lt;br /&gt;
== Before deploying Ceph ==&lt;br /&gt;
The deployment of the Ceph cluster is triggered by this line in the profile of the DEPLOY machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
include &#039;site/ceph&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
But before the deployment, your OSD machines must be prepared: the OSD disks must be partitioned and formatted correctly. In an OSD machine, at least one partition or one drive must be prepared to store the objects. To achieve this, you need to define an appropriate disk layout.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=556</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=556"/>
		<updated>2015-11-17T16:14:42Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The component ncm-ceph is based on ceph-deploy. It is documented [http://quattor-core.readthedocs.org/en/latest/components/ceph/ here].&lt;br /&gt;
&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and so it is the machine on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines so that a quorum can be maintained if one MON machine fails). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks (one OSD daemon per disk). When doing tests, you can put the MON daemons on OSD machines, but on a large production cluster, it&#039;s better to put the MON daemons on dedicated machines.&lt;br /&gt;
&lt;br /&gt;
The layout and the parameters of the cluster are described in the template site/ceph.&lt;br /&gt;
&lt;br /&gt;
== Before deploying Ceph ==&lt;br /&gt;
The deployment of the Ceph cluster is triggered by this line on the DEPLOY machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
include &#039;site/ceph&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
But before the deployment, your OSD machines must be prepared: the OSD disks must be partitioned and formatted correctly. In an OSD machine, at least one partition or one drive must be prepared to store the objects. To achieve this, you need to define an appropriate disk layout.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=555</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=555"/>
		<updated>2015-11-17T16:04:58Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The component ncm-ceph is based on ceph-deploy. It is documented [http://quattor-core.readthedocs.org/en/latest/components/ceph/ here].&lt;br /&gt;
&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and so it is the machine on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines to be able to have a quorum). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks. When doing tests, you can put the MON daemons on OSD machines, but on a large production cluster, it&#039;s better to put the MON daemons on dedicated machines.&lt;br /&gt;
&lt;br /&gt;
The layout and the parameters of the cluster are described in the template site/ceph.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=554</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=554"/>
		<updated>2015-11-17T15:43:56Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the cluster. The possible roles are: MON, MDS, OSD, and DEPLOY. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines to be able to have a quorum). An MDS is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks.&lt;br /&gt;
&lt;br /&gt;
The layout and the parameters of the cluster are described in the template site/ceph.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=553</id>
		<title>InstallCephWithQuattor</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=InstallCephWithQuattor&amp;diff=553"/>
		<updated>2015-11-17T15:37:26Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: Created page with &amp;quot;== Overview == Prior to the installation, you must assign the roles to the machines of the cluster. The possible roles are : MON, MDS, OSD. You need at least one MON machine, ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
Prior to the installation, you must assign roles to the machines of the cluster. The possible roles are: MON, MDS, OSD. You need at least one MON machine, but to avoid bottlenecks as the number of OSDs increases, it&#039;s better to have 3 MON machines (always keep an odd number of MON machines to be able to have a quorum). An MDS is only needed if you want to use CephFS.&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=552</id>
		<title>AdminPage</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=AdminPage&amp;diff=552"/>
		<updated>2015-11-17T15:23:47Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* CEPH */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Page for Administrators}}&lt;br /&gt;
==== Management of the whole cluster ====&lt;br /&gt;
*[[elog]]&lt;br /&gt;
*[[ShutDownCluster| How to properly switch off the cluster]]&lt;br /&gt;
*[[PutClusterOn| How to properly put the cluster on]]&lt;br /&gt;
==== CMS Services ====&lt;br /&gt;
*[[Phedex]]&lt;br /&gt;
*[[Heartbeat]]&lt;br /&gt;
*[[LoadTest]]&lt;br /&gt;
*[[FroNTier]]&lt;br /&gt;
*[[ProdAgent]]&lt;br /&gt;
*[[GitForSiteConf| instructions to commit siteconf to git]]&lt;br /&gt;
==== Grid Configuration Issues ====&lt;br /&gt;
*[[UpdateCertificates| Update the certificates of all our machines]]&lt;br /&gt;
*[[CreamIssues| Issues with cream and how to solve them]]&lt;br /&gt;
*[[PBS_TMPDIR| PBS TMPDIR]]&lt;br /&gt;
*[[APEL]]&lt;br /&gt;
*[[BDII]]&lt;br /&gt;
*[[FTS]]&lt;br /&gt;
*[[SL4_x86_64_WNs| SL4 x86_64 WNs]]&lt;br /&gt;
*[[CE_oveloaded| CE overloaded]]&lt;br /&gt;
*[[RB]]&lt;br /&gt;
*[[IPMI]]&lt;br /&gt;
*[[CA_certificates| Upgrade CA certificates]]&lt;br /&gt;
*[[Shutdown| Shutting down the cluster]]&lt;br /&gt;
*[[Software_Area_Switch| Software Area Switch]]&lt;br /&gt;
*[[KernelUpdate| Kernel mandatory updates for critical vulnerabilities]]&lt;br /&gt;
*[[Argus| Argus server and glexec on the workernodes]]&lt;br /&gt;
*[[ApelGapPublishing| Apel gap publishing]]&lt;br /&gt;
==== Files section ====&lt;br /&gt;
*[[DCache| dCache]]&lt;br /&gt;
**[[DeleteObsoleteFiles| Find obsolete files]]&lt;br /&gt;
**[[DCacheAdminMode| dCache admin mode]]&lt;br /&gt;
**[[FindSizeOnDcache| find size of a directory on dcache]]&lt;br /&gt;
**[[DcachePoolConfig1912| dCache Pool Postinstallation steps]]&lt;br /&gt;
**[[DCacheMaxMovers| Adapt max mover]]&lt;br /&gt;
**[[pnfsScripts| scripts to see on what pools files in a directory reside and to move them to other pools]]&lt;br /&gt;
*[[OlPnfsFiles| Procedure for removal of old user files on pnfs]]&lt;br /&gt;
*[[GetLostFiles| Retrieve lost files from datasets]]&lt;br /&gt;
*[[StorageConsistency| Storage Consistency]]&lt;br /&gt;
&lt;br /&gt;
==== Status and Monitoring ====&lt;br /&gt;
*[[ReservedWNs| List of reserved WNs]]&lt;br /&gt;
*[[Todo| Todo-list]]&lt;br /&gt;
*[[Monitoring]]&lt;br /&gt;
*[[Plans-Schedule| Plans/Schedule]]&lt;br /&gt;
*[[Grid_Troubleshooting_link| Grid Troubleshooting link]]&lt;br /&gt;
*[[Incident_reports| Incident Reports]]&lt;br /&gt;
*[[Dissapeared_software| How to put the software back]]&lt;br /&gt;
*[[Bad_WN| What to do when a WN sends a &amp;quot;bad_wn.pl&amp;quot; email to grid_admin?]]&lt;br /&gt;
*[[Nagios_installation| Nagios Installation at IIHE]]&lt;br /&gt;
*[[Restart_DCache| How to restart DCache ]]&lt;br /&gt;
&lt;br /&gt;
==== Info ====&lt;br /&gt;
*[[General_info| General info]]&lt;br /&gt;
*[[Installing_CMSSW| Installing CMSSW]]&lt;br /&gt;
*[[Installing_CRAB| Installing CRAB]]&lt;br /&gt;
*[[System_Benchmarks| System Benchmarks]]&lt;br /&gt;
*[[T2BTrac| T2B Trac config info]]&lt;br /&gt;
*[[HardWare| Hardware information]]&lt;br /&gt;
*[[NetworkSetup| Network Setup]]&lt;br /&gt;
*[[SetupMonitoringControlerSunfireV20z| Setup Monitoring of LSI Disk Controler on Sunfire V20z Server]]&lt;br /&gt;
*[[LDAP_UCL_IIHE| LDAP authentication system for the replication between UCL and IIHE sites]]&lt;br /&gt;
*[[GridAdminSurvivalGuide| IIHE Grid-admin survival guide]]&lt;br /&gt;
*[[Solaris| Solaris 10]]&lt;br /&gt;
*[[SolarisSSD| Adding an SSD card and configuring RAID, zpools, filesystems and shares on the new Solaris fileserver]]&lt;br /&gt;
*[[LinuxAdminTricks| Linux tricks for admins]]&lt;br /&gt;
*[[CrabLocalPbsSubmission| How to implement local PBS submission with CRAB?]]&lt;br /&gt;
*[[AddNewUserFromUCLToLDAP| How to create an account for a CMS user from UCL?]]&lt;br /&gt;
*[[OSErrata| Deploying OS errata]]&lt;br /&gt;
*[[BenchmarkHEPSPEC06| Howto benchmark a node with HEPSPEC06]]&lt;br /&gt;
*[[Installing_dcache_pool| Install a new Dcache pool]]&lt;br /&gt;
*[[BackupUsersHomeDirs| Backup of the users home dirs on Jefke]]&lt;br /&gt;
*[[MonWebServicesMigration| Migration of mon and its Web services]]&lt;br /&gt;
*[[HOWTORestartNagiosTest| HOWTO restart a nagios test manually]]&lt;br /&gt;
*[[CompileAndInstallRoot| Compile and install ROOT]]&lt;br /&gt;
*[[CleanCreamdb| Clean creamdb]]&lt;br /&gt;
*Reboot campaign for the workernodes :&lt;br /&gt;
**[[KernelUpdate| Reboot after a kernel update]]&lt;br /&gt;
**[[UpgradeWNstoSL5.5| Reboot after an OS upgrade]]&lt;br /&gt;
*[[ManageAllAdminScriptsWithSVN| Central management of all the admin scripts with SVN]]&lt;br /&gt;
*[[ConfigProxyCvmfs| Configuration of a proxy for CVMFS]]&lt;br /&gt;
**[[RecoverCvmfs| How to recover CVMFS]]&lt;br /&gt;
*[[TestNFSPerformance| How to test NFS Performance]]&lt;br /&gt;
*[[TetexNotAvailableInSL6| Alternatives to Tetex]]&lt;br /&gt;
*[[NewMethodUpdateKernelWorkernodes| A new easy method to update kernel on the workernodes]]&lt;br /&gt;
*[[AutomaticMailSendingFromCluster| About automatic mail sending from the cluster]]&lt;br /&gt;
*[[T2BTracAccess| T2B Trac access configuration]]&lt;br /&gt;
*[[WorkingWithRHEL7| Surviving RHEL7]]&lt;br /&gt;
*[[CCMWithKerberos| Experimental : Securing profiles with Kerberos]]&lt;br /&gt;
*[[MigrateToMediaWiki| Migration of T2B Wiki from Trac to MediaWiki]]&lt;br /&gt;
&lt;br /&gt;
==== Quattor ====&lt;br /&gt;
*[http://quattor.begrid.be/trac/centralised-begrid-v5/wiki/BEgridAndQuattor BEgrid wiki]&lt;br /&gt;
*[[Test_things| Test things]]&lt;br /&gt;
*[[Lemon_installation| Lemon installation]]&lt;br /&gt;
*[[QuattorPointers| Pointers]] to more in-depth information on quattor&lt;br /&gt;
*[[AddingMachineToCluster| Adding]] a new machine to the cluster&lt;br /&gt;
*[[AutomaticMachineTemplateGeneration| Automatic generation of hardware and profile templates for new workernodes]]&lt;br /&gt;
*[[InstallationBEgridClient0| Installation of a Quattor deployment server release 13.1]]&lt;br /&gt;
*[[InstallFilesNewOS| How to add a new OS to the Quattor Repository]]&lt;br /&gt;
*[[GenerateRPMFromATagInGithub| How to build an RPM from a tag in Github]]&lt;br /&gt;
*[[HowtoMigrateWNToCB9| How to migrate workernodes from CB8 to CB9]]&lt;br /&gt;
*[[WorkingInCB9| Working in CB9 (Quattor release &amp;gt;= 14.2)]]&lt;br /&gt;
*[[AideMemoire| FAQ - Aide-mémoire - Howtos]]&lt;br /&gt;
*[[BuildANewPysvnOnAiiServer| Howto build a new pysvn on a SL63 AII server]]&lt;br /&gt;
*[[QuattorFreeIPA| Quattor and FreeIPA]]&lt;br /&gt;
*[[NewRuncheck| Rewrite of the runcheck script in Perl]]&lt;br /&gt;
*[[HardDisksManagement| Hard disks management]]&lt;br /&gt;
*[[Aquilon| Aquilon]]&lt;br /&gt;
&lt;br /&gt;
==== KVM virtualization ====&lt;br /&gt;
*[[VirtWithKVM| Virtualization of the new CREAM-CE on dom02 with KVM]]&lt;br /&gt;
*[[VirtWithKVM1| Installation of the new virtualization server dom04]]&lt;br /&gt;
*[[CreateVM| Easy creation of virtual machines]]&lt;br /&gt;
*[[MonitoringvHostswithGanglia| Monitoring the KVM vHosts with Ganglia]]&lt;br /&gt;
&lt;br /&gt;
==== T2B Cloud ====&lt;br /&gt;
*[[MigrationToOpenNebula| Transforming the KVM hypervisors farm into an OpenNebula cloud]]&lt;br /&gt;
*[[WorkingInT2BCloud| Working in the T2B cloud]]&lt;br /&gt;
&lt;br /&gt;
==== gUSE/WS-PGRADE portal ====&lt;br /&gt;
*[[PortalInstall| Portal installation]]&lt;br /&gt;
*[[PortalConfig| Portal configuration]]&lt;br /&gt;
*[[PortalOperations| Portal operations]]&lt;br /&gt;
&lt;br /&gt;
==== Migration to EMI-3 ====&lt;br /&gt;
*[[MigrateBEgridToEMI3_part1| BEgrid facilities - Part 1]]&lt;br /&gt;
*[[MigrateBEgridToEMI3_part2| BEgrid facilities - Part 2]]&lt;br /&gt;
&lt;br /&gt;
==== XEN ====&lt;br /&gt;
*[[Manage_XEN| Manage XEN]]&lt;br /&gt;
*[[XenQuattor| Xen and Quattor]]&lt;br /&gt;
&lt;br /&gt;
==== CEPH ====&lt;br /&gt;
*[[UnderstandingCeph| Understanding Ceph]]&lt;br /&gt;
*[[InstallCephWithQuattor| Installing Ceph with Quattor]]&lt;br /&gt;
*[[ExperimentsWithCeph| Experiments with Ceph]]&lt;br /&gt;
*[[CephBasics| Operating a Ceph cluster]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{TracNotice|{{PAGENAME}}}}&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=514</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=514"/>
		<updated>2015-11-04T11:28:38Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Where to operate? ==&lt;br /&gt;
All operations should be done on the ceph-admin machine. In our experimental testbed, it is cephq1.wn.iihe.ac.be.&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
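For scripting around this output, the health field can be extracted with a one-liner. This is a minimal sketch (not part of the wiki), using a sample line copied from the output above:

```shell
# Extract the health field from "ceph -s" style output.
# With a live cluster you would feed "ceph -s" in via a pipe instead
# of the canned sample line used here.
sample='     health HEALTH_OK'
health=$(printf '%s\n' "$sample" | awk '/health/ {print $2}')
printf '%s\n' "$health"    # prints HEALTH_OK
```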
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing by migrating placement groups out of the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active, some degraded objects, and finally active+clean when migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running. That is, the OSD may be up and out. You must stop your OSD before you remove it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, a &amp;quot;ceph -s&amp;quot; should show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd rm {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=512</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=512"/>
		<updated>2015-11-03T22:42:05Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Remove OSDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command :&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing it by migrating placement groups off the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running; that is, the OSD may be up and out. You must stop an OSD before removing it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, &amp;quot;ceph -s&amp;quot; should now show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd rm {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=511</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=511"/>
		<updated>2015-11-03T22:40:37Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Remove OSDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
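The per-OSD removal steps in the next section are repetitive and lend themselves to scripting. A dry-run sketch, not from the original page, that only prints the commands for review rather than executing them; the OSD numbers 3, 4 and 5 are hypothetical placeholders for the IDs reported by &quot;ceph osd tree&quot;:

```shell
# Hedged dry-run sketch: print (do NOT execute) the removal commands
# from the procedure below for a hypothetical set of OSD ids.
for osd in 3 4 5; do
  echo "ceph osd out $osd"
  echo "ceph osd crush remove osd.$osd"
  echo "ceph auth del osd.$osd"
  echo "ceph osd rm $osd"
done
```

Once the printed commands have been reviewed, the echo prefixes can be dropped to run them for real, ideally one OSD at a time while watching &quot;ceph -w&quot;.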
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing it by migrating placement groups off the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running; that is, the OSD may be up and out. You must stop an OSD before removing it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, &amp;quot;ceph -s&amp;quot; should now show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd rm {osd-num}&lt;br /&gt;
#for example&lt;br /&gt;
ceph osd rm 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=510</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=510"/>
		<updated>2015-11-03T22:31:37Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Remove OSDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing it by migrating placement groups off the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running; that is, the OSD may be up and out. You must stop an OSD before removing it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, &amp;quot;ceph -s&amp;quot; should now show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=509</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=509"/>
		<updated>2015-11-03T22:29:58Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Remove OSDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing it by migrating placement groups off the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running; that is, the OSD may be up and out. You must stop an OSD before removing it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, &amp;quot;ceph -s&amp;quot; should now show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
	<entry>
		<id>https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=508</id>
		<title>CephBasics</title>
		<link rel="alternate" type="text/html" href="https://t2bwiki.iihe.ac.be/index.php?title=CephBasics&amp;diff=508"/>
		<updated>2015-11-03T22:27:59Z</updated>

		<summary type="html">&lt;p&gt;Stephane GERARD: /* Remove OSDs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Operating a Ceph cluster}}&lt;br /&gt;
== Check Ceph cluster status ==&lt;br /&gt;
The command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows the current status of the Ceph cluster:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@cephq1 ~]# ceph -s&lt;br /&gt;
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d&lt;br /&gt;
     health HEALTH_OK&lt;br /&gt;
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}&lt;br /&gt;
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4&lt;br /&gt;
     osdmap e78: 6 osds: 6 up, 6 in&lt;br /&gt;
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects&lt;br /&gt;
            27920 kB used, 4021 GB / 4106 GB avail&lt;br /&gt;
                 192 active+clean&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The following command displays a real-time summary of the cluster status and major events:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Remove OSDs ==&lt;br /&gt;
When you want to remove a machine that contains OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and avoid problems:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Identify the OSDs hosted by the machine with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd tree&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Take the OSDs out of the cluster:&lt;br /&gt;
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd out {osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Repeat this operation for all the OSDs on the machine.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Monitor the data migration:&lt;br /&gt;
Once you have taken the OSDs out of the cluster, Ceph will begin rebalancing it by migrating placement groups off the OSDs you&#039;ve removed. You can follow this process with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph -w&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Stop the OSD daemons:&lt;br /&gt;
After you take an OSD out of the cluster, it may still be running; that is, the OSD may be up and out. You must stop an OSD before removing it from the configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh {osd-host}&lt;br /&gt;
/etc/init.d/ceph stop osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Repeat the last command for all the OSDs on the machine.)&lt;br /&gt;
As a result, &amp;quot;ceph -s&amp;quot; should now show the OSDs as down.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Remove the OSDs:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSDs from the CRUSH map:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph osd crush remove osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove the OSD authentication key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ceph auth del osd.{osd-num}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;/div&gt;</summary>
		<author><name>Stephane GERARD</name></author>
	</entry>
</feed>