This page comprises the shift guide for the ALICE Forward Multiplicity Detector.
Appendix
This document consists of a single large HTML file with a number of images linked in. The images are made using the Screen Shot entry point in the FMD Menu. Images can be edited using Gimp (available on alifmdwn002).
The document resides in alifmdwn002:~fmd/public_html/shift_guide and a copy is made at top.nbi.dk:~hehi/public_html/fmd/fmd/shift_guide/.
During an FMD shift you have a number of things to do. The design of the FMD control system is such that it shouldn't be too hard to get these things done.
If you are not familiar with the FMD or you need a reminder, you should perhaps read the section Overview of the FMD.
The duties of an FMD shifter are roughly as follows.
Here's what a typical shift might look like.
The first thing you should do is to log in to the FMD ACR machine. It is located in the far back of the 1st side room.
The login details are as follows
Machine: | aldaqacr37 |
User name: | fmd |
Password: | ******* |
If you do not know the password, contact one of the FMD contact persons.
Once you are logged in, the first thing to do is to start up the FMDMenu. To do so, run at a prompt:
prompt> fmdmenu &
This will bring up a small window in the top-right of the screen that looks like
The menu consists of 3 parts:
Pressing the Shifter menu item will bring up the shift-relevant sub-menu. It looks like
Press the Shifter menu item on the FMDMenu to bring up the shifter sub-menu. Select the DCS UI menu item to bring up the DCS UI. An MS Windows log-in screen will appear.
To log in specify your NICE credentials. Your NICE account must be registered as part of the FMD_SHIFTER group. If it is not, you will not be able to log in. To be added to that group contact the FMD Team.
After you have logged into the MS Windows machine (the DCS operator node), you will be presented with the FMD DCS UI and an authorisation dialog:
Log-in details are as follows:
User name: | your NICE user name |
Password: | ******* |
Note, that in the future, the password will be your NICE password.
If no one has ownership of the DCS FSM, the shifter must take ownership. The padlock symbol next to the FMD_DCS button (see Navigating the DCS UI) indicates whether it is owned by the shifter (green, closed), by someone else (red, closed), or by no one (grey, open). The shifter should click the padlock and select Take.
The shifter now has control of the detector, and the padlock should be closed and green.
Once done with the detector, the shifter must release the lock by clicking the lock symbol on the main window and selecting Release in the drop-down menu.
The detector is now released; the padlock should be open and grey, and free for others to pick up.
Once you have released the lock, press the large Close button in the bottom right corner of the main window.
Below is an image of the main DCS UI panel with indications of the important parts.
Below is a description of the main panel corresponding to the FSM node FMD_DCS. However, the rest of the node panels are similar.
The same type of button and drop-down menu is present on most other panels. Again, it allows you to see the state and control the FSM of the node (and its daughters) for which you are viewing the panel.
Important: This button is a last resort. One must try to use the state machine to shut down gracefully before using this button.
To use the button, right click to unlock it, and then left click. It will pop up a dialog asking you for confirmation. If left alone, the button will be locked after a few seconds.
Note that these elements may update slower than normally.
The various panels of the control system will hopefully provide enough information for the shifter to diagnose problems before contacting an expert. All the panels are explained in the appendix DCS UI Panels.
If more documentation is needed for these panels, please contact the FMD Team.
If the detector is off, then the DCS UI will look like
Next, you need to bring the detector to STANDBY. Do this by selecting the FMD_DCS button in the main panel and select GO_STANDBY
The detector will check if cooling is on, and turn on low-voltages for the RCUs. The UI will reflect this
This process can take a while (a few minutes), so be patient. Once the detector has reached STANDBY the UI will look like
At this point, we should turn on the front-ends and configure the detector for the type of run we need. Again, press the FMD_DCS button, and select the item CONFIGURE in the drop-down menu
N.B.: The CONFIGURE action can be taken from any of the states STANDBY, STBY_CONFIGURED, or BEAM_TUNING, so though the starting point might be different, the steps and responses involved are always the same.
A dialog will appear and ask you for the run type tag.
Valid tags are
When the detector configures the front-end electronics, it shifts to the state DOWNLOADING
Once the process completes, all low-voltages are turned on, and the detector is properly configured. The state will then be STBY_CONFIGURED.
N.B.: States STBY_CONFIGURED (BEAM_TUNING) are redundant. Actions allowed in STBY_CONFIGURED are also available from BEAM_TUNING. Switching from STBY_CONFIGURED to BEAM_TUNING and back is instantaneous — it is merely a re-naming of the state.
After this, we need to turn on the high-voltages to provide the bias voltage over the silicon bulk. We do that by going to the state READY. Once we have done that, the detector is no longer in a safe state, since the silicon is now sensitive to charged particles. Therefore, one should only bring the detector to READY when needed.
Again, press FMD_DCS in the main panel, and select the item GO_READY in the drop-down menu.
During this process, the detector switches to the state MOVING_READY.
After this, we are in the state READY and we can now take data with the detector
If the detector is in the state READY, then the main panel will look as at the end of Turning on the detector (Large image).
You basically do the same things as when turning on the detector, but in reverse. The first thing is to click the FMD_DCS button and select GO_STBY_CONF in the drop-down menu.
The detector will go into the state MOVING_STBY_CONF
When finished, no bias voltages are on, while the front-end remains configured and low voltages are on. The state is STBY_CONFIGURED.
Next step is to turn off the front end cards and low voltages to these. Click the FMD_DCS button and select GO_STANDBY from the drop-down menu.
The detector enters the state CLEARING while it is shutting off the front-end.
The detector is now in the state STANDBY.
At this point, only the RCU power is on. To turn completely off, we must execute the GO_OFF command. Click the FMD_DCS button and select the GO_OFF entry in the drop-down menu.
The detector is now turning everything off.
Upon completion, the detector is OFF
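The turn-on and turn-off walk-throughs above trace a fixed path through the FSM. Purely as an illustration (the real state machine runs inside the PVSS-based DCS, and the transient states DOWNLOADING, MOVING_READY, MOVING_STBY_CONF, and CLEARING are omitted here), the shifter-visible states and commands can be sketched as a lookup table:

```python
# Toy sketch of the shifter-visible FMD DCS states and commands.
# Illustration only: the real FSM lives in PVSS.
TRANSITIONS = {
    ("OFF", "GO_STANDBY"): "STANDBY",
    ("STANDBY", "CONFIGURE"): "STBY_CONFIGURED",
    ("STBY_CONFIGURED", "CONFIGURE"): "STBY_CONFIGURED",
    ("BEAM_TUNING", "CONFIGURE"): "STBY_CONFIGURED",
    ("STBY_CONFIGURED", "GO_READY"): "READY",
    ("READY", "GO_STBY_CONF"): "STBY_CONFIGURED",
    ("STBY_CONFIGURED", "GO_STANDBY"): "STANDBY",
    ("STANDBY", "GO_OFF"): "OFF",
}

def execute(state, command):
    """Return the new state, or raise if the command is not allowed."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError(f"{command} not allowed in state {state}")

# The full turn-on sequence described above:
state = "OFF"
for cmd in ("GO_STANDBY", "CONFIGURE", "GO_READY"):
    state = execute(state, cmd)
assert state == "READY"
```

Note that, as stated earlier, CONFIGURE is accepted from STANDBY, STBY_CONFIGURED, and BEAM_TUNING alike, always ending in STBY_CONFIGURED.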
To take data for Standalone, Pedestal evaluation, or Gain evaluation runs, you need to open the DCA of the FMD (other runs are managed by the central shifters and coordinated by the shift leader).
In the Shifter menu of the FMDMenu select the item ECS Menu.
This will open a splash window where you select the FMD
The splash will then disappear, and three new windows will appear
It is also recommended that you open the Read-out Status window by clicking the readout status entry in the ECS Menu. This will show the current event rate, used GDC and LDCs and other run information.
To start a run:
Stand-alone runs are runs in which data is only collected by a single detector and are triggered by a CTP emulator. The trigger frequency can be configured through the LTU client available from the Expert part of the FMDMenu or from the DCA menu bar.
To take a stand-alone data run, the shifter should follow the procedure below.
From time to time the shifter must take calibration runs. There are two kinds of calibration runs needed by the FMD:
In these runs, the detector collects 1100 events without the base-line subtraction and zero-suppression filters turned on in the ALTROs. The data is analysed by an on-line DA and the result is uploaded to the DAQ file exchange server. Later, the off-line SHUTTLE will pick up these files and push the result into OCDB. The off-line reconstruction picks up this data from OCDB.
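Conceptually, what the DA computes per strip is just the mean and spread of the raw ADC values over those events. A toy version (illustrative only; the real DA is part of AliROOT):

```python
import math

def pedestal_and_noise(adc_samples):
    """Mean (pedestal) and standard deviation (noise) of the raw ADC
    values of one strip -- a toy stand-in for the on-line DA."""
    n = len(adc_samples)
    mean = sum(adc_samples) / n
    var = sum((a - mean) ** 2 for a in adc_samples) / n
    return mean, math.sqrt(var)

ped, noise = pedestal_and_noise([100, 102, 98, 100])
assert ped == 100.0 and abs(noise - math.sqrt(2)) < 1e-9
```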
The DA also stores a local copy of the result on the LDC which PedConf will later pick up and load into the ALTRO pedestal memory. The files are stored in the directory
aldaqpcL:/dateSite/ldc-FMD-D-0/work/ddlE.ddl
where D is the detector number, and L and E are given by:
Detector | 1 | 2 | 3 |
L | 156 | 157 | 158 |
E | 3072 | 3073 | 3074 |
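The substitutions follow a simple pattern (L = 155 + D and E = 3071 + D). A small illustrative helper, assuming the path layout quoted above:

```python
def pedestal_file(detector):
    """Path of the locally stored pedestal result for FMD sub-detector
    1, 2 or 3, following the table above (L = 155 + D, E = 3071 + D)."""
    if detector not in (1, 2, 3):
        raise ValueError("FMD sub-detectors are numbered 1-3")
    ldc = 155 + detector   # aldaqpc156, 157, 158
    ddl = 3071 + detector  # 3072, 3073, 3074
    return f"aldaqpc{ldc}:/dateSite/ldc-FMD-{detector}-0/work/ddl{ddl}.ddl"

assert pedestal_file(1) == "aldaqpc156:/dateSite/ldc-FMD-1-0/work/ddl3072.ddl"
```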
The detector must be calibrated for PEDESTAL (see the box Valid tags). If not, the pedestal data uploaded to the ALTROs will be wrong, resulting in large event sizes and corrupted physics data.
When the detector is configured for GAIN (see the box Valid tags), the data arriving at the ALTROs are generated by a pulse sent to the pre-amplifier and shaper circuits of the VA1 chips. A single input channel on the VA1 chip is pulsed at a time, and the pulse is stepped up by the BC on the digitizer cards. For each of the 128 input channels and for each pulse size injected, a number of events is collected before progressing to the next pulse size or input channel. Management of this procedure is done automatically by the BC, and the DAQ is configured to take enough events (currently 102700 events).
The data from the Gain Evaluation Run is processed and analysed by an on-line DA and the result is uploaded to the DAQ file exchange server. From there, the off-line SHUTTLE will later pick it up and put the result on the OCDB for the off-line reconstruction to pick up and use.
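Conceptually, the gain of a channel is the slope of its mean ADC response versus the injected pulse size. A toy straight-line least-squares fit (illustrative only; not the actual DA code):

```python
def fit_gain(pulse_sizes, mean_adc):
    """Straight-line least-squares fit; the slope is the channel gain
    (ADC counts per pulse step) and the offset its baseline."""
    n = len(pulse_sizes)
    sx, sy = sum(pulse_sizes), sum(mean_adc)
    sxx = sum(x * x for x in pulse_sizes)
    sxy = sum(x * y for x, y in zip(pulse_sizes, mean_adc))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - slope * sx) / n
    return slope, offset

# Perfectly linear toy response: gain of 2 ADC counts per pulse step.
gain, offset = fit_gain([0, 32, 64, 96], [10, 74, 138, 202])
assert abs(gain - 2.0) < 1e-9 and abs(offset - 10.0) < 1e-9
```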
It is important to configure the detector for GAIN before starting a Gain Evaluation Run. If not, the gains pushed to the OCDB will be corrupt, resulting in wrong reconstruction of the physics data.
For both kinds of calibrations runs, it is important that there is no beam in the LHC. If there is, the resulting pedestals and gains will be corrupted, again resulting in wrong reconstruction of the physics data. An appropriate time for the calibration runs is when the machine is ramping down the magnets after a fill or dump. At that time, there's no beam in the LHC and ALICE does not need to be Safe since beam is not imminent.
The requirements of the calibration runs are summarised below.
Calibration run type | Configuration tag | # of events* | Trigger rate | Time to complete** | Frequency | Beam conditions
---|---|---|---|---|---|---
Pedestal | PEDESTAL | >1000 | ≤100Hz | ~5 minutes | 1-2/day | No beam
Gain | GAIN | >102400 | ≤100Hz | ~25 minutes | 1/2day | No beam
*Handled automatically by ECS.
**Includes set-up time and DA post-processing.
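As a rough sanity check on the "time to complete" column, the bare event-collection time at the full 100Hz trigger rate can be estimated as below; set-up and DA post-processing account for the remainder of the quoted totals:

```python
def collection_time_minutes(n_events, trigger_rate_hz=100.0):
    """Bare data-taking time at a given trigger rate; set-up and
    DA post-processing make up the rest of the table's totals."""
    return n_events / trigger_rate_hz / 60.0

# Pedestal: >1000 events is only ~10 s of actual data taking.
assert round(collection_time_minutes(1000), 2) == 0.17
# Gain: >102400 events is ~17 minutes of data taking.
assert round(collection_time_minutes(102400), 1) == 17.1
```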
Currently, there is no automation for calibration runs, and it is up to the shifter to properly set-up and execute the run. Hopefully this will change in the near future.
N.B.: The importance of configuring the detector for the right type of run cannot be stressed too much. If the detector is not configured properly, it has a direct, highly negative impact on the physics results.
The most efficient way to execute calibration runs is for the shifter to get the DCS lock from the central DCS shifter. If that is not possible, the shifter will have to talk the central DCS shifter through the motions. Whether the shifter or the central ECS shifter executes the run is not important, as long as whoever does it selects the appropriate type of run.
N.B.: After executing a Pedestal Evaluation Run and/or Gain Evaluation Run, the detector must be configured for PHYSICS.
Here are the steps involved.
Here are the steps involved.
The main tool for monitoring the detector is the DCS UI. On the front panel there are three buttons: State Summary, Fec Summary, and Graphical Summary. Each will bring up an overview of the whole detector that helps the shifter monitor it in a convenient way.
Clicking the State Summary button will bring up a window with the state matrix in it. This panel can be kept open while navigating the DCS UI.
Clicking the Fec Summary button will bring up a window with a large table that shows the values of the monitored temperatures, voltages, and currents. This panel allows the shifter to look one place only for this information.
Clicking any FEC name will bring up the panel for that FEC.
N.B.: When not in the state READY, the negative power supplies are not on, so one should not be alarmed that the columns IM2V, IM2VVA, M2V, and M2VVA are out of bounds. Furthermore, since T1SENS and T2SENS depend on the negative power supply, they should not be considered either when not in the state READY. The image above shows the situation in STBY_CONFIGURED.
Finally, the button Graphical Overview brings up the window seen below.
Currently, the main application for monitoring the data on-line is the so-called PatternCalib display. It is a home-made application based on AliROOT which displays calibrated ADC signals in a 2D display.
First, one should copy the calibrations from the various LDCs to the DQM machine. Every time the calibrations are updated, i.e., a Pedestal evaluation or Gain evaluation run was taken, the new calibrations have to be copied over. To do so, select the Shifter menu in the FMDMenu and under the heading Monitoring select Copy Calibrations. Note, that there's no visual feed-back except that the FMDMenu is unresponsive.
Then select the Shifter Menu in FMDMenu and under the heading Monitoring select Pattern (calibrated).
After a while 3 windows will pop up.
A terminal shows you messages and errors. One canvas shows you the calibrated ADC spectra summed over all strips for the last event, and the second canvas shows the xy-hit distribution in 3 panels corresponding to 3 sub-detectors.
The left slider at the bottom of the display adjusts the lower and upper ADC count cuts. The left side of the right slider adjusts the number of noise values that are discounted in the signal processing.
The Continue button steps to the next event as soon as it is available. The Start button will step through all events as they come in. Pause will pause the stepping; note that it will not respond until the next event arrives. Redisplay forces a refresh of the displays.
The histogram display shows two histograms: one containing all valid data, and one containing the data that survived the defined cuts.
To stop the monitoring, select Quit ROOT in the File menu of one of the canvases.
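The effect of the ADC cut sliders can be pictured as a simple filter (assumed behaviour, for illustration only): signals inside the selected window survive into the second histogram.

```python
def apply_adc_cuts(signals, lower, upper):
    """Keep only calibrated ADC values inside [lower, upper] -- a toy
    model (assumption) of the cut applied by the display sliders."""
    return [s for s in signals if lower <= s <= upper]

survivors = apply_adc_cuts([5, 40, 120, 300, 750], lower=20, upper=500)
assert survivors == [40, 120, 300]
```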
The other kind of monitoring tool used, is the AMORE DQM. To start this, select Shifter->Monitoring->Start AMORE (do this first) in the FMDMenu. This will pop up a terminal in which it says it's starting the agent. When prompted to Hit return to continue ..., do so.
Next you need to start a client. Select Shifter->Monitoring->Start AMORE client in the FMDMenu. Two windows will appear: A tool-bar like window and a display with a selection tree.
Select any of the FMD histograms (1 for each ring) to monitor the ADC distribution.
This section needs to be filled in.
Information about clearing trips (infrastructure)
How to restore half-rings to a valid state
What to do in case of configuration problems.
Bad pedestal runs.
and so on ...
Person | Title | Email | Phone | Contact for
---|---|---|---|---
Jens Jørgen Gaardhøje | Project Leader | gardhoje@nbi.dk | +45 20 99 53 09 | Management issues, Run coordination
Børge Svane Nielsen | Technical Leader | borge@nbi.dk | +41 76 48 74221 (164221) | Overall technical issues
Hans Bøggild | | boggild@nbi.dk | |
Ian Bearden | Computing coordinator | bearden@nbi.dk | +45 31 32 53 23 |
Kristjan Gulbrandsen | Shift coordinator | gulbrand@nbi.dk | +41 76 48 75724 (165724) | Shifts, DCS, DAQ, Cooling, Hardware, Shift guide
Christian Holm Christensen | | cholm@nbi.dk | +45 24 61 85 91 | DCS, DAQ, Offline, Monitoring, Hardware, Shift guide
Hans Hjersing Dalsgaard | | canute@nbi.dk | +45 21 23 38 54 | Offline
Carsten Søgaard | | soegaard@nbi.dk | +45 26 71 08 16 |
Casper Nygaard | | cnygaard@nbi.dk | +45 27 12 55 18 |
The FMD system consists of a number of components, as outlined in the figure below.
The sensors are the active elements of the FMD. When a charged particle traverses the volume, it creates electron-hole pairs that induce a current on the output pads of the sensor. For this to happen, a reverse bias voltage must be applied to the sensors (see High Voltage).
The sensors are 320µm thick silicon, produced by Hamamatsu in Japan. There are two kinds of sensors: inner type sensors and outer type sensors. Both kinds of sensors are divided into two azimuthal sectors. Furthermore, each sector is divided into a number of radial strips: 512 for inner type sensors and 256 for outer type sensors.
The sensors are arranged into rings. An inner type ring consists of 10 sensors, giving 20 segments in the azimuthal direction and 512 segments in the radial direction, for a total of 10240 read-out elements. An outer type ring consists of 20 sensors, giving 40 segments in the azimuthal direction and 256 segments in the radial direction, which also comes to a total of 10240 read-out elements.
The three sub-detectors of the FMD are built up of these kinds of rings. FMD1 (at z=320cm from the interaction point) has only 1 inner type ring. FMD2 (at z=83.4cm from the interaction point) has both an inner and an outer ring. The last, FMD3 (at z=-62.8cm from the interaction point), also consists of both an inner and an outer type ring. Thus in total there are 5 rings, named FMD1i, FMD2i, FMD2o, FMD3i, and FMD3o.
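The channel bookkeeping above is easy to verify: both ring types come out at 10240 read-out elements.

```python
# Strip counts from the text above; both ring types give the same
# number of read-out elements.
RINGS = {
    "inner": {"sensors": 10, "sectors_per_sensor": 2, "strips": 512},
    "outer": {"sensors": 20, "sectors_per_sensor": 2, "strips": 256},
}

def channels(ring):
    r = RINGS[ring]
    return r["sensors"] * r["sectors_per_sensor"] * r["strips"]

assert channels("inner") == 10240  # 20 sectors x 512 strips
assert channels("outer") == 10240  # 40 sectors x 256 strips
```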
The current signals from the sensors are very small and need to be amplified. A front-end electronics card, called the "hybrid", mounted directly on the sensors, takes care of that (see Front-End Electronics).
The front-end electronics is composed of three parts: the hybrid cards, the digitizer cards, and the read-out controller unit.
These cards are mounted directly on the sensor and hold a number of VA1 pre-amplifier and shaper ICs. There are two kinds of hybrid cards: the inner type, which has 8 VA1s, and the outer type, which has 4 VA1s. Each VA1 is connected to 128 strips on the sensors, and the amplified signals from these strips are multiplexed into a single output line. The conglomerate of a sensor and a hybrid card is called a module.
Each ring, whether it is an inner or outer type, is split into two half-rings. Each of these half-rings has one digitizer card (FMDD) mounted on the back of the honeycomb support plate that holds the modules. The main purpose of the FMDD cards is to digitize the analogue signals from the VA1s. The FMDD has 2 major components:
Each sub-detector has one associated RCU, which is connected to the FMDDs of the sub-detector's half-rings. The main responsibility of the RCUs is to receive triggers from the CTP and to collect the data from the ALTROs on the FMDDs. It also facilitates communication with the ALTROs and BC of the connected FMDDs. The RCUs are situated just outside of the TPC and are connected to the FMDDs via 3m long bus cables, to keep the irradiation down.
At the other end, the RCU is connected to the data acquisition farm via an optical fibre (known as the DDL) and, through a daughter card (the DCSC), to the network of the DCS. The DDL is used to transfer data from the RCU to the acquisition system, while the Ethernet connection is used to control and monitor the RCU and associated FMDDs.
On the DCSC is an embedded core with Linux installed. An FmdFeeServer runs on that machine. This server provides monitoring information to the DCS, as well as control for configuring all of the front-end electronics.
The data collected by the RCU is sent over the DDL to an LDC. For the FMD there are three such LDCs: aldaqpc156 connected to FMD1, aldaqpc157 connected to FMD2, and aldaqpc158 connected to FMD3.
The LDC can record the data locally on disk, but more often the data is sent to a GDC for event building. The GDC can then write the full events to PDS. The number and identity of the GDCs are never fixed and can vary from run to run.
To upload pedestals for the pedestal subtraction filter in the ALTROs, each of the LDCs runs a PedConf daemon. This daemon reads the last processed pedestal data from a Pedestal Evaluation run and puts that into the pedestal memory of each ALTRO channel. Note that the PedConf daemons are controlled by the DCS, not the DAQ system.
On each LDC is also an optical link to the HLT cluster. The data received by the LDCs can be mirrored on this interface to allow the HLT to process the data.
The DAQ system also provides monitoring channels for on-line monitoring of the data, as well as quasi-automated data quality monitoring.
The FMD cannot control the DAQ in case of global runs. But for stand-alone runs, the FMD will control the DAQ.
As mentioned earlier, each sensor of the FMD needs a bias voltage to work as a detector. This bias voltage is supplied by a number of high-voltage cards situated in CR4 in the ALICE shaft. The cards are protected by interlocks from the DSS in case that the cooling plant fails.
The bias voltage supplied to the sensors depends on the type of the sensors. For inner type sensors it is 70V, while for the outer type it is 130V.
The detector control system is a conglomerate of many specific subsystems, ranging from the FmdFeeServer to cooling, from low-voltage to alarms. To easily control all these various subsystems, a Finite State Machine (FSM) runs in the DCS project of the FMD.
The FSM is coded to take care of all the steps involved in turning the detector on, preparing for data taking, monitoring the system, and of course turning the detector off again. The FSM is built up in a hierarchical manner: at the bottom one finds state machines that control particular hardware devices, and as one moves up the hierarchy these are collected into logical units. A hardware device could be a low voltage channel or an FMDD. A logical unit could correspond to a half-ring with low/high-voltage and FMDD daughters. The user interface of the DCS reflects this structure.
The DCS of the FMD is built upon the SCADA system PVSS. PVSS provides distributed project management, archiving (or logging), and so on.
The trigger system of ALICE is hierarchical. At the low level one finds the LTUs, which distribute triggers to the detectors and receive busy signals from them. At the higher level one finds the CTP, which processes trigger signals from detectors or other sources and decides what to do with them: distribute them or ignore them.
The CTP is under the control of the central shifters. But the LTUs can be controlled by the FMD shifter for stand-alone runs. One can configure the trigger rate, the trigger types, and so on.
Note that each FMDD has its own busy output, which is fanned in through an OR gate to provide the busy seen by the LTU. The fan-in is under the control of the FMD and should always be configured appropriately.
All of the front-end electronics requires low-voltage power supplies to operate. The FMDD needs 3.3V, 2.5V, 1.5V, and -2V, while the RCU needs 4.3V and 3.3V (the FMDDs distribute power to the hybrid cards, which therefore do not have separate power lines).
The low-voltage modules are situated in the pit on the upper gallery on the O-side. They are controlled via the mainframe in CR4 by DCS.
The FMD does not have its own cooling plant. Instead we leech off the TPC cooling plant. We can therefore not control the cooling of the detector. We have, however, installed flow-monitors on our lines, and these are available and reacted upon in the DCS.
The DSS is a service provided by the LHC and ALICE. It has systems for fire and smoke detection, power fall-outs, and cooling plant failures.
Below, we'll briefly look over the panels of the FMD DCS UI.
This is the top-level panel that the shifter will mainly see. At the top is the FSM button and drop-down menu. On the top-left are 3 buttons corresponding to the global systems: Infrastructure, Run Object, and Run Configuration. At the far right is the Emergency Shut-down button.
In the centre is a graphical representation of the FMD. Placed close to each sub-detector are FSM buttons that show the state of the sub-detectors.
At the bottom is a tabulated overview of the FMD state machine. The states of all objects in the state machine are shown, allowing the shifter to quickly identify where a possible problem occurred. One can click any element in this table to open the corresponding panel. The legend on the right shows how to interpret the colours in the table. If you hover the cursor over an element, you will see a tool-tip text that tells you the name and state of the object.
This panel shows the overall state of the global infrastructure. There are three buttons: the low voltage control power supply, the high voltage interlock channel, and the power supply mainframe.
The 48V power supply powers the low-voltage crate in the pit. If this is not on, one cannot control the low voltages supplied to the detector electronics. The power supply itself is situated in rack O24 on the upper left gallery in the pit.
The panel shows the load and connector voltages, currents, and power dissipation, status flags, and a trend of the output voltage and current, and temperature as a function of time.
This high-voltage channel's output is in fact not connected to anything. It exists solely to ensure proper ramp down of the other high voltage channels. A hardware interlock from the cooling plant is connected to this channel. If the cooling plant trips, the interlock will disappear, and this channel will then ramp down the other high voltage channels. The channel is physically located in the CAEN crate in CR4.
The panel shows the voltage and current, status words, and a trend of the voltage and current. The most interesting thing here is whether the channel is Tripped or not.
The mainframe sits in CR4. It contains all the high voltage cards and a branch controller that communicates with low voltage power supplies in the pit.
The panel shows the status of the mainframe.
The image above shows the FMD2 sub-detector panel. The other two sub-detector panels are the same, except that FMD1 only has an inner ring.
At the top, is the familiar FSM button and drop-down menu. Below are two buttons showing the state of the cooling for that particular detector and the state of the RCU of that sub-detector.
Clicking on either of these two buttons will take you to the panel of the cooling and RCU respectively.
Below the two buttons are graphical displays of the state of the half-rings of the sub-detector. Again, clicking on these will take you to the relevant half-ring.
Again, there's the FSM button and drop-down menu of this RCU. Below are 5 buttons
Below this are a number of tabs. They show various pieces of information about the front-end cards attached to the RCU. The information includes temperatures, voltages, and currents monitored by the front-end cards.
At the very bottom are 2 boxes showing where you can find more information about what's going on with the FeeServer and PedConf — the pedestal uploader.
This panel shows the state of the cooling of a sub-detector.
This panel shows the state of the RCU, the front-end cards that have been turned on, and a log from the FmdFeeServer running on the daughter DCSC board.
MiniConf is a daemon running on the Linux worker node (alifmdwn002). Upon request it configures the front-end electronics for data taking, pedestal extraction, or gain calibrations.
Below the FSM button and drop-down is shown the last configuration command executed by MiniConf.
Below that, are a number of tabs — one tab for each defined kind of configuration that MiniConf can do. Each tab contains a number of GUI elements that allow the experts to control how MiniConf will configure the front-end electronics. These elements are grayed out since the normal shifter is not allowed to change anything here.
The large table in the middle shows the log of the MiniConf execution. Problems will show up as red or yellow messages.
PedConf is a set of 3 daemons running on the LDCs in the DAQ network. Upon request they upload the latest pedestal data to the front-end for use in the baseline suppression filter.
Below the FSM button and drop-down is shown the last configuration command executed by PedConf.
Below that are a number of tabs, one for each defined kind of configuration that PedConf can do. Each tab contains a number of GUI elements that allow the experts to control how PedConf will configure the front-end electronics. These elements are grayed out since the normal shifter is not allowed to change anything here.
The large table in the middle shows the log of the PedConf execution. Problems will show up as red or yellow messages.
This shows the load and connector voltages, currents, and power of the 3.3V power supply for the RCU. Also shown are status bits and trends of the voltages and currents.
This shows the load and connector voltages, currents, and power of the 4.3V power supply for the RCU. Also shown are status bits and trends of the voltages and currents.
On top is the familiar FSM button and drop-down menu. Below are 5 buttons — 4 for the power supplies and one for front-end card state.
Below is a graphical display of the bias voltage state, and the front-end card state.
The status of the 3.3V power supply for a digitizer card. It shows the voltage, current, and power at the target as well as at the connector. Also shown are status bits and a trend of the output voltage and current.
NB: The voltage should read 4.3V, one volt higher than what the title says.
The status of the 2.5V power supply for a digitizer card. It shows the voltage, current, and power at the target as well as at the connector. Also shown are status bits and a trend of the output voltage and current.
NB: The voltage should read 3.5V, one volt higher than what the title says.
The status of the 1.5V power supply for a digitizer card. It shows the voltage, current, and power at the target as well as at the connector. Also shown are status bits and a trend of the output voltage and current.
NB: The voltage should read 2.5V, one volt higher than what the title says.
The status of the -2.0V power supply for a digitizer card. It shows the voltage, current, and power at the target as well as at the connector. Also shown are status bits and a trend of the output voltage and current.
NB: The voltage should read 3.0V, one volt over what the title says.
NB: The scale of the trend, and the values displayed are positive. This should be interpreted as negative voltages, as the wires are connected with opposite polarity.
This panel shows the status of a high voltage channel.
Apart from the FSM button and drop-down menu this panel shows the current values and limits of the various monitored currents, voltages, and temperatures, as well as the error and interrupt state of the FEC.
The display is grouped to correspond with the interrupt bit mask shown near the top. Note, that if a bit is grayed out, it is not part of the active interrupt mask.
The screen-shot above shows the case for the expert user, who can change the limits. For normal shifters, the entries are grayed out and cannot be edited.
The run control unit is an object used by the central DCS operator to make sure the detectors are ready for taking data. This panel shows the state machine object that encapsulates the run control unit.
The run configuration tool allows the central DCS to configure all detectors for a particular kind of run. This panel shows the state machine object that encapsulates the run configuration tool.
There are panels for most nodes of the FSM tree. Most of these are hardware panels, and they are of little use to the normal shifter.
Go to the page:
User name: | admin |
Password: | ***** |
ALICE cavern: |
CR4: | CAEN crate with High voltage cards and Low voltage branch driver (rack near entrance door on left).
CR3: | JTAG board and engineering node. No access without DCS specialist.
CR1: | BusyBox and FMD-LDCs. No access without DAQ specialist.
ALICE CONTROL ROOM (ACR): | FMD Console in first detector room.
TPC clean room: | Various tools and spare parts in cupboards on top-level.
HEHI | ||
Office 1-R-0034 @ CERN: | +41 22 76 74 603 | |
Shift phone: | +41 76 48 75 991 | |
ACR | ||
Near FMD station: | +41 22 76 76 452 | |
Near TPC station: | +41 22 76 71 723 | |
Shift leader: | ||
DAQ | ||
Pierre Vande Vyvre | +41 22 76 78 336 | |
DCS | ||
Lennart Jirden: | +41 22 76 75 125 | 16-4459 |
Andre Augustinus: | +41 22 76 76 294 | 16-3534 |
Trigger | ||
Anton Jusko | +41 22 76 75 977 | 16-2090 |
Off-line | ||
Federico Carminati | +41 22 76 74 959 | 16-4843 |
Latchezar Betev | ||
RCU | ||
Luciano Musa: | +41 22 76 76 261 | 16-3119 |
Run coordinator: | ||
Paul Kuijer | +41 22 76 75 466 | 16-5700 |
Jan Rak | +41 22 76 79 732 | 16-5757 |
Technical coordinator: | ||
Lars Leistam: | +41 22 76 73 920 | 16-0551 |
Werner Riegler: | +41 22 76 77 585 | +41 76 48 72 986 |
Spokesperson: | ||
Jurgen Schukraft: | +41 22 76 75 955 | 16-4544 |
LHC Main control room: | +41 22 76 76 922 | |
Emergency: | +41 22 76 74 444 | 112 |
Taxi: | ||
Switzerland | +41 22 32 02 202 | |
CERN Main switchboard | +41 22 76 76 111 |