Summary

This page documents the steps I took in setting up an all-sky, zero-spindown, 20-500 Hz search in O1 [EDIT: We are now considering 80-500 Hz instead]. There is some interest in looking for signals that are roughly zero-spindown (e.g., from the annihilation of ultralight axions, or bosons in general, in clouds around black holes that formed via superradiance). The question I am looking to answer is: how much better could we do with a quick E@H search (e.g., 1 month) than with the currently available all-sky searches [EDIT: We are now considering 4-5 months]? Ultimately: is this search worth it?

At the same time, this is my first time setting up a search, and therefore my first time using Sinéad's scripts, so this page also serves as a low-level tutorial for how to use them.

Sinéad's documentation can be found at this page. I'm not trying to reproduce that documentation; rather, I am going through it step by step and noting down, e.g., the parts that weren't obvious to me and where I had to ask her a question.

A particular thing about a zero-spindown search: in an all-sky search with spindown, there is a close connection between fdot and sky position; that is, a signal with a mismatch in sky will still be found, but at a slightly offset spindown, and vice versa. Since we no longer have spindown in the search, we can no longer rely on this happening, so we will need a smaller skygrid mismatch than the equivalent non-zero-spindown searches.
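
To make this concrete (a schematic reminder, not something from Sinéad's documentation): the detector-frame signal frequency is roughly $f(t) \approx [f_0 + \dot{f}\,(t - t_{\rm ref})]\,(1 + \vec{v}(t)\cdot\hat{n}/c)$, where $\vec{v}(t)$ is the detector velocity and $\hat{n}$ the sky direction. A small sky offset $\delta\hat{n}$ introduces a slowly varying frequency error of order $f_0\,\vec{v}(t)\cdot\delta\hat{n}/c$ that can be partially absorbed by a compensating offset in $\dot{f}$; with $\dot{f}$ pinned to zero, that freedom is gone, hence the need for a finer skygrid.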

Note that this page will document everything I do sequentially. Since I will likely have to iterate over everything, this is not the right place to look for an explanation of the final setup. Instead, this is more like a long-winded tutorial, and includes how to get a sense for what a "reasonable" setup is.

Setting up the scripts

The scripts are in the gitlab repository sinead.walsh/search_setup_optimisation. I cloned the repository on atlas, since I will eventually be doing things like submitting condor jobs, and it's easier to have everything in one place. Since this was my first time cloning on atlas, I had to generate a new ssh key (in the gitlab browser interface, go to "Settings" a.k.a. User Settings, then "SSH Keys," and follow the instructions there), then do "git clone git@gitlab.aei.uni-hannover.de:sinead.walsh/search_setup_optimisation.git". The scripts are written in python. You'll also need a set of modules that live in another of Sinéad's repositories. If you are on atlas, all you have to do is add the following line to your .bash_profile (or equivalent): export PYTHONPATH=$PYTHONPATH:/home/sinead.walsh/Code/Modules:/home/sinead.walsh/Documents/Papers/houghgctcompare/modules. Otherwise, you'll have to clone the houghgctcompare repo as well and modify the python path accordingly. (I think you only need the second part of the path --- i.e., the houghgctcompare modules --- and not the first one.) Don't forget to do "source ~/.bash_profile" (or the equivalent) before using the scripts. [EDIT: Now that we have left the LSC, whereas Sinéad is technically still part of it for now, I have copied the relevant directories to my own workspace, just in case.]

Making the segment lists

For a semicoherent search, given a set of SFTs, we will want to define search setups with different coherence times and therefore different numbers of segments. Since I am running an all-sky search in O1, I will try the Tcoh value of 210.0 hrs from that search, as well as test out a few other values (150, 180, 240, 270). I want a different segment list for each value of Tcoh, so that I will have five setups to compare. Note that it's difficult to tell right now what kind of value would be appropriate; the original O1 all-sky search only went to 100 Hz whereas I'm going up to 500 Hz, which would suggest shorter Tcohs, BUT I have removed an entire dimension of parameter space (fdot), which would allow for longer Tcohs (at least when comparing computational cost). It's unclear which effect will win out.

The necessary script is data_creation/make_segment_list_v3.py. I made a few changes to the script itself:
  • Note that the O1AS SFT timestamps and O1MD timestamps are different; for O1AS we had 4744 SFTs (H1 and L1 combined) whereas for O1MD we had 6287. This is because, for O1AS, whenever we encountered an SFT with a transient disturbance we discarded the SFT altogether, whereas for O1MD we simply zeroed out the disturbance and still kept the SFT.
  • I added the variable "plotID" (which shows up for some of the other examples), which (I think) is the start of the name for the figures that the script outputs. I also added exclude_hardcoded_epochs = 0 which is an option that is not needed for this search (it is being used in the O2AS search).
  • I changed plot_title_ID to "O1" (this is the title that will be printed on the plot, which does not have to be the same as the filename of the figure).
  • I made a directory called segmentLists in the overall repo directory (i.e., the dir that contains data_creation).
  • In the version I am using, right above avail_sfts_h1 and avail_sfts_l1, there is a note that "S6 requires [0]. O1, O2 don't." This accounts for the fact that timestamp list files sometimes have two columns and sometimes one. However, this doesn't need to be hardcoded, so I changed the line to avail_sfts_h1 = np.sort(np.genfromtxt(tsfiles[0],usecols=0)) and the equivalent for L1 (see the sketch after this list). Note that np.genfromtxt() is perfectly fine with usecols=0 for a one-column file, but np.loadtxt() doesn't like it.
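
For concreteness, a minimal sketch of that change (the file names here are hypothetical placeholders; in the script, tsfiles is already defined further up):

    import numpy as np

    tsfiles = ["O1_H1_timestamps.txt", "O1_L1_timestamps.txt"]  # placeholders

    # np.genfromtxt() accepts usecols=0 whether the timestamp file has one
    # column or two (the GPS start times are always column 0), so nothing
    # needs to be hardcoded per run:
    avail_sfts_h1 = np.sort(np.genfromtxt(tsfiles[0], usecols=0))
    avail_sfts_l1 = np.sort(np.genfromtxt(tsfiles[1], usecols=0))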

I think that's it for the things I changed. The outputs are two figures in data_creation/segment_list_checks and a segment list in segmentLists. (If you're wondering how to view figures while ssh'd, you need to use the command ssh -Y instead of just ssh, and then you can view an image file by using display, for instance.) Note: because of the way the script is currently written, you'll also have to do ssh -Y to actually run make_segment_list_v3.py if you want the plots to be made. If you just want to print out the information (number of segments, number of SFTs, etc.) you don't need the -Y.

Here are some numbers for the search setups I've tried. Two notes:
  • Note that the refTime here is different from the refTime in the O1MD search; this is because the first segment in O1MD starts a bit later than the very first 1800-s SFT. Is this something I should be doing as well?
  • There are two "Tobs" values printed out; the first says "Tobs (not always avail sfts)" and the second says "Tobs from avail sfts." I think the first calculates Tobs assuming you have enough SFTs to cover all the segments, while the second takes into account that the last available SFT can end before the nominal end of the last segment, given the coherence length. Tobs in the table below is the second one.
The quotation marks in the table below are ditto marks: the value is the same as in the row above. The Tcoh=210 setup is the odd one out, so the quotation marks in the Tcoh=240 row mean those values match Tcoh=150, 180, etc., not Tcoh=210.
| Tcoh (h) | nsft | Nseg | Tcoh (s) | Tobs (s) | refTime | tMin | tMax |
| 150 | 6287 | 20 | 540000 | 10628462 | 1131937856 | 1126623625 | 1137250287 |
| 180 | " | 17 | 648000 | " | " | " | " |
| 210** | " | 15 | 756000 | ?? | ?? | " | ?? |
| 240 | " | 13 | 864000 | " | " | " | " |
| 270 | " | 11 | 972000 | " | " | " | " |
** This one came with a note saying "WARNING: nsfts < 5 in h1 or l1 in last segment. ? Setup to handle this, 4 3" and "1 segments skipped 7 sfts lost. remaining 6280." I think this means that the total time did not divide evenly into 210-hour segments, leaving a final segment with very few SFTs, which was dropped. I think this is OK? Note for Sinéad: when this happened, the script ended with an error ("Image data can not convert to float") and did not plot anything. Note for Sinéad: it still prints out the same Tobs, refTime, and tMax as before, but that can't be right, since the SFTs in the last segment were dropped.

Finding search setups

The segment lists live in the output directory in their respective sub-directories. I then wanted to run find_valid_setups.py, which, given details about a search setup, estimates its runtime and other relevant numbers. The main parameters I am interested in iterating over are the mismatches; since I have only frequency and sky, I give the script a handful of different frequency and sky mismatches and it returns the estimated runtimes.

The script is run as python find_valid_setups.py with an ini file; some sample ini files live in the ini_files directory. The sections of the ini file are DEFAULT, GENERAL, and (at minimum) two sections pertaining to the search setup you are interested in exploring. Here are the things I changed (a condensed sketch of the resulting ini file follows the list):
  • I renamed the last two sections to GCT9m210h100Hz and 9m210h. The latter name is obvious: I am interested in a search with Tcoh = 210 h, using 9 months of data. The former adds a few more pieces of information: the name of the search code (GCT) is prepended because I might be interested in using another search code (i.e., Weave) as well, and 100Hz comes from the fact that ultimately I will be testing things using a set of injections at a given frequency (of my choice). The assumption is that injection-and-recovery studies at any one frequency are representative of the entire search band. Because I renamed the search sections in this way, I also need to tell the script where to look, so I changed the variable timingSection to 9m210h. The script will then automatically look for the section GCT9m210h100Hz based on my other selections, e.g., skygridf0ofInj and method (in the GENERAL section). The place to change this name is searchGridSection in the GENERAL section of the ini file.
  • Because I want to search only a single spindown value (i.e., spindown=0), I set search_fixed_spindown = 1. (There is a comment that "its implemented in find valid setups but not submit. It hasn't been tested in find valid setups." I will worry about this in the next step but for now I just want to figure out the setups to use.) For the same reason, I set f1BandPatch = 0. I don't remember what f0BandPatch was before but in my current version it is 0.05 (I don't think this really matters that much, as per the comment above it).
  • I kept Nnodes, tauG0, tauGF0resamp, and tauBayes the same. I checked with Sinéad, and she said that the ones in the example are all the most updated values based on Jing and Reinhard's findings.
  • I changed some values in the next section and added some more options. My search goes from 20 to 500 Hz (minf0FullSearch and maxf0FullSearch). I also set f0BandFullSearch, but looking at the code I think it calculates this automatically anyway. I set both f1BandFullSearch and minF1FullSearch to zero. I kept allskyFullSearch = 1 and skygrid_step_freq = 5. I know that in the past we've also changed the skygrid every 10 Hz, and I'm not sure what effect (if any) a zero-spindown search has on this, so this is something I should ask about further.
  • In the GCT9m210h100Hz section, I kept the skyBandPatchRadius the same (the comment says "only used by Weave for runtime measurement" but I think this means "Weave uses this only for runtime measurement" and not that "only Weave uses this"). I asked Sinéad about what mismatches were appropriate; she said that, in the past, she has found mismatches of around 0.3 in frequency were ok, and in the O1 all-sky search, the skygrid mismatch at stage 0 was 1e-3. Since I only have frequency (i.e., I don't have to consider any potential mismatch in spindown), she said I can/should aim for lower mismatches in frequency. I am going to aim for skygrid mismatches of around 1e-3.
  • In the 9m210h section, I just changed the values to the relevant ones. These are all printed out by make_segment_list_v3.py when making the segments.
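
To collect the above in one place, here is a condensed sketch of how I understand the ini file to be laid out. This is paraphrased from the sample ini and my notes, so section placement and exact spellings should be checked against the real sample file rather than trusted from here:

    [GENERAL]
    method = GCT                       ; search code (value here illustrative)
    timingSection = 9m210h
    searchGridSection = GCT9m210h100Hz
    search_fixed_spindown = 1          ; single (zero) spindown value
    f0BandPatch = 0.05
    f1BandPatch = 0
    minf0FullSearch = 20               ; Hz
    maxf0FullSearch = 500              ; Hz
    f1BandFullSearch = 0
    minF1FullSearch = 0
    allskyFullSearch = 1
    skygrid_step_freq = 5

    [GCT9m210h100Hz]
    skyBandPatchRadius = ...           ; unchanged from the sample ini

    [9m210h]
    ; Tcoh, Nseg, refTime, etc., as printed by make_segment_list_v3.py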

After running find_valid_setups.py, I get (among other things) a file called valid_setups_walk0.txt (or a different number) in the relevant subdirectory within the output directory. This lists all the setups for the different combinations of mismatches I asked the script to look at. The second-to-last column, tauTotal_months, gives the estimated Einstein@Home runtime in E@H months (EM). I am aiming for a search that will take around 1 EM. My concern is that, since this is a zero-spindown search, it is difficult to gauge what sort of mismatches and Tcohs are appropriate. In any case, here is a table with some potential search setups that give estimated runtimes of around 1 EM. These all use the same set of SFTs (4744) with the same total Tobs (9e6 s), with the Bayesian statistics (BSGL, etc.) included, using resampling. The frequency mismatches I asked it to look over are [0.4, 0.3, 0.2, 0.1] and the skygrid mismatches are [1e-3, 7e-4, 5e-4, 2e-4, 1e-4]. Based on conversations with Sinéad, I should be more concerned with minimizing the skygrid mismatch than the frequency mismatch, so I won't be testing all of the following setups, but here are the potentially relevant ones for posterity.

Note that, because of the issues mentioned above with the Tcoh=210 printout, the numbers below aren't 100% trustworthy for Tcoh=210, but then again this is all ballpark anyway.
| Tcoh (h) | freq mismatch | sky mismatch | dfreq (Hz) | dtheta_patch (rad) | Ntemplates | runtime (EM) |
| 150 | 0.4 | 7e-4 | 1.3e-6 | 4.0e-3 | 1.5e15 | 0.72 |
| 150 | 0.4 | 5e-4 | 1.3e-6 | 3.3e-3 | 2.1e15 | 1.00 |
| 150 | 0.3 | 7e-4 | 1.1e-6 | 4.0e-3 | 1.8e15 | 0.83 |
| 150 | 0.3 | 5e-4 | 1.1e-6 | 3.3e-3 | 2.5e15 | 1.16 |
| 150 | 0.2 | 1e-3 | 9.1e-7 | 4.7e-3 | 1.5e15 | 0.71 |
| 150 | 0.2 | 7e-4 | 9.1e-7 | 4.0e-3 | 2.1e15 | 1.01 |
| 150 | 0.2 | 5e-4 | 9.1e-7 | 3.3e-3 | 3.0e15 | 1.42 |
| 150 | 0.1 | 1e-3 | 6.5e-7 | 4.7e-3 | 2.1e15 | 1.00 |
| 150 | 0.1 | 7e-4 | 6.5e-7 | 4.0e-3 | 3.0e15 | 1.43 |
| 180 | 0.4 | 7e-4 | 1.1e-6 | 4.0e-3 | 1.8e15 | 0.73 |
| 180 | 0.4 | 5e-4 | 1.1e-6 | 3.3e-3 | 2.5e15 | 1.02 |
| 180 | 0.3 | 7e-4 | 9.3e-7 | 4.0e-3 | 2.1e15 | 0.84 |
| 180 | 0.3 | 5e-4 | 9.3e-7 | 3.3e-3 | 2.9e15 | 1.18 |
| 180 | 0.2 | 1e-3 | 7.6e-7 | 4.7e-3 | 1.8e15 | 0.72 |
| 180 | 0.2 | 7e-4 | 7.6e-7 | 4.0e-3 | 2.6e15 | 1.03 |
| 180 | 0.2 | 5e-4 | 7.6e-7 | 3.3e-3 | 3.6e15 | 1.44 |
| 180 | 0.1 | 1e-3 | 5.4e-7 | 4.7e-3 | 2.5e15 | 1.02 |
| 180 | 0.1 | 7e-4 | 5.4e-7 | 4.0e-3 | 3.6e15 | 1.46 |
| 210 | 0.4 | 7e-4 | 9.2e-7 | 4.0e-3 | 2.1e15 | 0.75 |
| 210 | 0.4 | 5e-4 | 9.2e-7 | 3.3e-3 | 3.0e15 | 1.05 |
| 210 | 0.3 | 7e-4 | 8.0e-7 | 4.0e-3 | 2.5e15 | 0.87 |
| 210 | 0.3 | 5e-4 | 8.0e-7 | 3.3e-3 | 3.4e15 | 1.21 |
| 210 | 0.2 | 1e-3 | 6.5e-7 | 4.7e-3 | 2.1e15 | 0.74 |
| 210 | 0.2 | 7e-4 | 6.5e-7 | 4.0e-3 | 3.0e15 | 1.06 |
| 210 | 0.1 | 1e-3 | 4.6e-7 | 4.7e-3 | 3.0e15 | 1.05 |
| 240 | 0.4 | 7e-4 | 8.1e-7 | 4.0e-3 | 2.4e15 | 0.74 |
| 240 | 0.4 | 5e-4 | 8.1e-7 | 3.3e-3 | 3.4e15 | 1.04 |
| 240 | 0.3 | 7e-4 | 7.0e-7 | 4.0e-3 | 2.8e15 | 0.86 |
| 240 | 0.3 | 5e-4 | 7.0e-7 | 3.3e-3 | 3.9e15 | 1.20 |
| 240 | 0.2 | 1e-3 | 5.7e-7 | 4.7e-3 | 2.4e15 | 0.74 |
| 240 | 0.2 | 7e-4 | 5.7e-7 | 4.0e-3 | 3.4e15 | 1.05 |
| 240 | 0.2 | 5e-4 | 5.7e-7 | 3.3e-3 | 4.8e15 | 1.47 |
| 240 | 0.1 | 1e-3 | 4.0e-7 | 4.7e-3 | 3.4e15 | 1.04 |
| 240 | 0.1 | 7e-4 | 4.0e-7 | 4.0e-3 | 4.9e15 | 1.49 |
| 270 | 0.4 | 7e-4 | 7.2e-7 | 4.0e-3 | 2.7e15 | 0.71 |
| 270 | 0.4 | 5e-4 | 7.2e-7 | 3.3e-3 | 3.8e15 | 0.99 |
| 270 | 0.3 | 7e-4 | 6.2e-7 | 4.0e-3 | 3.2e15 | 0.82 |
| 270 | 0.3 | 5e-4 | 6.2e-7 | 3.3e-3 | 4.4e15 | 1.14 |
| 270 | 0.2 | 1e-3 | 5.1e-7 | 4.7e-3 | 2.7e15 | 0.70 |
| 270 | 0.2 | 7e-4 | 5.1e-7 | 4.0e-3 | 3.9e15 | 1.00 |
| 270 | 0.2 | 5e-4 | 5.1e-7 | 3.3e-3 | 5.4e15 | 1.40 |
| 270 | 0.1 | 1e-3 | 3.6e-7 | 4.7e-3 | 3.8e15 | 0.99 |
| 270 | 0.1 | 7e-4 | 3.6e-7 | 4.0e-3 | 5.5e15 | 1.41 |
At this point I am going to assume that these Tcoh values are suitable. From Sinéad's wiki page: "Different setups are usually compared using the average mismatch per setup or the efficiency at fixed false alarm rate. The former can only be used to compare setups at fixed Tcoh, the latter can be used to compare setups across Tcoh or at fixed Tcoh." So as a first step, I can just do (signal-only) injections and compare the measured mismatch distributions across the setups, for each value of Tcoh. I just need enough injections to get a sense for the mismatch distribution, so 100-200 should be enough.

As a reminder, because I don't have any fdot in my search, I'm going to want to focus on reducing the skygrid mismatch.

Comparing search setups with the same Tcoh

For each Tcoh, I want to pick the best setup (i.e., the one with the lowest average mismatch) out of the set identified above. I have decided to make 200 signal-only injections; for this step, I used my own scripts to make the injections and to run the searches. However, I am using Sinéad's script for creating the skypatches, since this is slightly non-trivial.

My injections are on atlas at /home/sylvia.zhu/O1/O1AS_zeroSD/setup/injections_signalOnly, split into 200 directories. The injections have frequencies randomly chosen between 100 and 105 Hz; zero spindown; sky positions drawn uniformly over the sky; a small range in h0; and uniform distributions over the other nuisance parameters.

I used Sinéad's generate_skypatches_MCs_randstart.py as a standalone script to create the sky patches. (Note: in a future step, I will be using this script as part of the whole set of scripts for creating and searching injections, but since this is my first time and this is an important step, I decided to do things more by hand for now.) To call the script, do python generate_skypatches_MCs_randstart.py fmax dskypatch patch_centers skyPatchID mismatch (an example call follows the list). The inputs are the following:
  • fmax is the largest frequency that will use this particular skygrid, which defines how we draw our skypatches. We want skygrids that have roughly the same ratio of ????? ...
  • dskypatch is the radius of the skypatch, in ecliptic coordinates, in radians. Too small and we don't recover the signal, too big and we use time/resources we don't need to. Sinéad said that, in general, she makes sure to include around 30 to 50 skygrid points in a skypatch. Since this is a slightly different search and we don't have a spindown, I am aiming for 50-60 points. The spacing between the points is dtheta_patch, which was determined from the sky mismatch that I told find_valid_setups.py to consider. Note that generate_skypatches will print out the number of skygrid points for each generated skypatch.
  • patch_centers is a file containing, at the very least, the sky positions of the injections (as well as some sort of index for the injections). Each skypatch is centered around the injection sky position, with a small randomized offset. The alpha and delta for the injections should be in separate columns that can be read in by np.loadtxt().
  • skyPatchID is only used in the output file name for keeping track (e.g., if you want to specify a date of creation).
  • mismatch is the mismatch in sky.
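
For example, a call for one of the sky mismatches I'm testing might look like the following (fmax = 105 matches the top of my injection band and 1e-3 is one of the tested sky mismatches, but the dskypatch value and the patch_centers file name here are purely illustrative):

    python generate_skypatches_MCs_randstart.py 105 0.01 injection_params.txt v0 1e-3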

I made a few small changes. Some are because I was using the script as a standalone, and others are to get around hardcoded parameters.
  • The variable output_dir determines where the skypatches are saved to, and is hardcoded to correspond to Sinéad's directory structure (i.e., that she has her scripts in a directory called Code, which I do not). I changed this to correspond to my own.
  • (This is not a change but just a note) There is a comment here that says FIXME next to search_area_reduction but I don't think this is ever used.
  • Because I was using my own files for injection parameters, the columns for alpha and delta don't match the columns that are hardcoded in the script. After the patch_centers file is read in, there are a few lines that assume column numbers for ra, dec, and injID; I changed these so that they would correspond to the ones in my file.
Based on the input parameters, you can see that, for a given injection, the skypatch depends only on the sky mismatch. Here are the skypatches used for one injection (three, because I tested a total of three different sky mismatches):
Screen_Shot_2018-07-16_at_14.59.48.png Screen_Shot_2018-07-16_at_14.59.04.png Screen_Shot_2018-07-16_at_14.58.30.png
In all of these, the X at the center marks the injection sky position and the circles mark the skygrid points included in the skypatch. The three plots show increasing sky mismatches, which is why the spacing in between the points gets larger (the axis limits are the same in each). As can be seen, Sinéad's code puts in a small offset from the actual sky position of the injection.

I then ran HierarchSearchGCT using my own scripts to get the loudest candidate for each search setup. I have five values of Tcoh, and for each value I have 9 setups, so I will have five sets of nine histograms. In addition to these gridded searches around each injection, I also compute 2F at the exact injection parameters, which is needed for the mismatch. Roughly speaking, for a given injection, the total time for these 45 searches was about an hour, so each search took a little over a minute; this is because the searches only contained ~50 skygrid points and ~100 frequency points.
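
For reference, the measured mismatch I compute from these two numbers follows what I understand to be the usual convention (assuming the same definition as in previous setup-comparison studies): $\mu = (2F_{\rm pm} - 2F_{\rm loudest}) / (2F_{\rm pm} - 4)$, where $2F_{\rm pm}$ is the (average) 2F at the exact injection parameters and the $-4$ subtracts the expected value of the statistic in pure noise.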

However, here is where I ran into the first indication that I am not exploring the right setups (explained in next section).

Interpreting the mismatch distributions and signal-only search results

I looked at 5 different Tcoh values and plotted the mismatch distributions for each Tcoh:
mmHist_v0_Tcoh150.png mmHist_v0_Tcoh180.png
mmHist_v0_Tcoh210.png mmHist_v0_Tcoh240.png
mmHist_v0_Tcoh270.png  
The main thing to notice is that the distributions all peak around 0.6-0.8, with averages around 0.7, which is a bit higher than we normally aim for (in O1AS the measured mismatch was closer to 0.6).

The measured mismatch is increasing slightly with longer Tcoh. This is something that Sinéad confirmed generally happens; with increasing Tcoh, it becomes easier for a template and an underlying signal to go out of phase, thereby increasing the mismatch. In general, longer Tcoh is only better if we can compensate by spending more computational resources.

There might be other trends as well (e.g., with frequency spacing) that I should make sure to check later. For now, I just wanted to see whether this was in the right ballpark, and there are indications that I should be looking elsewhere; e.g., here is one particular injection that, as far as I can tell, is fairly representative (for posterity, this is inj60, Tcoh=210h, mfreq=0.1, msky=0.001):

Screen_Shot_2018-07-16_at_17.43.45.png Screen_Shot_2018-07-16_at_17.56.07.png

In the first figure, the loudest candidate is roughly 1e-5 Hz from the true parameters, which is roughly 25 frequency gridpoints away. In the second figure, the loudest candidate (marked with a larger green circle) is one of the closest gridpoints to the true position (red cross). In fact, I checked a handful of different injections, and the loudest candidate was always recovered at the skygrid point closest to the injection. This, coupled with the high mismatch, suggests that I could be going lower in sky mismatch. Recall that I chose my sky mismatches to be no larger than the O1AS value, and that search included spindown; since I am no longer including spindown, I should push the sky mismatch down even more. Sinéad has said that, previously, the loudest candidate tended to be recovered within a few skygrid points of the injection (hence why she includes 30-50 in a patch), but that it generally wasn't the closest one.

Finding valid setups: Take 2

With this in mind, I am rerunning the above steps with smaller skygrid mismatches. To compensate, I am also going shorter in Tcoh, and will use Tcoh=150h as my longest coherence time. O1AS had Tcoh=210h, but it only went to 100 Hz; in contrast, S6Bucket (which had a similar frequency range) had Tcoh=60h, so that is the shortest Tcoh I'm going to look at.

Creating segments:
| Tcoh (h) | nsft | Nseg | Tcoh (s) | Tobs (s) | refTime | tMin | tMax |
| 150 | 6287 | 20 | 540000 | 10628462 | 1131937856 | 1126623625 | 1137250287 |
| 120 | " | 25 | 432000 | " | " | " | " |
| 90 | " | 33 | 324000 | " | " | " | " |
| 60 | " | 49 | 216000 | " | " | " | " |
Next, finding search setups with appropriate estimated computational times. Note: I am going to increase my computational budget to 2 EM, given that the minimum practical runtime on E@H is around 6 weeks.

Here, I am looking for skygrid mismatches of 5e-4 and below. I also went much higher in freq mismatch, up to 5. Note: according to Sinéad, the input frequency and skygrid mismatches are more like normalization factors than actual physical mismatches, so a mismatch >1 is not physically impossible. I'm testing a high freq mismatch to see just how much this matters.
| # | Tcoh (h) | freq mismatch | sky mismatch | dfreq (Hz) | dtheta_patch (rad) | Ntemplates | runtime (EM) |
| 6 | 150 | 5.0 | 8e-5 | 4.6e-6 | 1.3e-3 | 3.8e15 | 1.77 |
| 7 | 150 | 5.0 | 7e-5 | 4.6e-6 | 1.2e-3 | 4.3e15 | 2.02 |
| 8 | 150 | 5.0 | 6e-5 | 4.6e-6 | 1.1e-3 | 5.0e15 | 2.36 |
| 13 | 150 | 2.0 | 1e-4 | 2.9e-6 | 1.5e-3 | 4.8e15 | 2.24 |
| 14 | 150 | 2.0 | 9e-5 | 2.9e-6 | 1.4e-3 | 5.3e15 | 2.48 |
| 21 | 150 | 1.0 | 2e-4 | 2.0e-6 | 2.1e-3 | 3.4e15 | 1.58 |
| 30 | 150 | 0.7 | 2e-4 | 1.7e-6 | 2.1e-3 | 4.0e15 | 1.89 |
| 39 | 150 | 0.6 | 2e-4 | 1.6e-6 | 2.1e-3 | 4.3e15 | 2.05 |
| 47 | 150 | 0.5 | 3e-4 | 1.4e-6 | 2.6e-3 | 3.2e15 | 1.49 |
| 48 | 150 | 0.5 | 2e-4 | 1.4e-6 | 2.1e-3 | 4.7e15 | 2.24 |
| 56 | 150 | 0.4 | 3e-4 | 1.3e-6 | 2.6e-3 | 3.5e15 | 1.67 |
| 57 | 150 | 0.4 | 2e-4 | 1.3e-6 | 2.1e-3 | 5.3e15 | 2.51 |
| 64 | 150 | 0.3 | 4e-4 | 1.1e-6 | 3.0e-3 | 3.0e15 | 1.45 |
| 65 | 150 | 0.3 | 3e-4 | 1.1e-6 | 2.6e-3 | 4.1e15 | 1.93 |

| 6 | 120 | 5.0 | 8e-5 | 5.7e-6 | 1.3e-3 | 3.0e15 | 1.77 |
| 7 | 120 | 5.0 | 7e-5 | 5.7e-6 | 1.2e-3 | 3.4e15 | 2.02 |
| 8 | 120 | 5.0 | 6e-5 | 5.7e-6 | 1.1e-3 | 4.0e15 | 2.36 |
| 13 | 120 | 2.0 | 1e-4 | 3.6e-6 | 1.5e-3 | 3.8e15 | 2.24 |
| 14 | 120 | 2.0 | 9e-5 | 3.6e-6 | 1.4e-3 | 4.2e15 | 2.49 |
| 21 | 120 | 1.0 | 2e-4 | 2.6e-6 | 2.1e-3 | 2.7e15 | 1.59 |
| 30 | 120 | 0.7 | 2e-4 | 2.1e-6 | 2.1e-3 | 3.2e15 | 1.89 |
| 39 | 120 | 0.6 | 2e-4 | 2.0e-6 | 2.1e-3 | 3.5e15 | 2.05 |
| 47 | 120 | 0.5 | 3e-4 | 1.8e-6 | 2.6e-3 | 2.5e15 | 1.49 |
| 48 | 120 | 0.5 | 2e-4 | 1.8e-6 | 2.1e-3 | 3.8e15 | 2.24 |
| 56 | 120 | 0.4 | 3e-4 | 1.6e-6 | 2.6e-3 | 2.8e15 | 1.67 |
| 57 | 120 | 0.4 | 2e-4 | 1.6e-6 | 2.1e-3 | 4.2e15 | 2.51 |
| 64 | 120 | 0.3 | 4e-4 | 1.4e-6 | 3.0e-3 | 2.5e15 | 1.45 |
| 65 | 120 | 0.3 | 3e-4 | 1.4e-6 | 2.6e-3 | 3.3e15 | 1.93 |

| 5 | 90 | 5.0 | 9e-5 | 7.6e-6 | 1.4e-3 | 2.0e15 | 1.56 |
| 6 | 90 | 5.0 | 8e-5 | 7.6e-6 | 1.3e-3 | 2.3e15 | 1.76 |
| 7 | 90 | 5.0 | 7e-5 | 7.6e-6 | 1.2e-3 | 2.6e15 | 2.01 |
| 8 | 90 | 5.0 | 6e-5 | 7.6e-6 | 1.1e-3 | 3.0e15 | 2.34 |
| 13 | 90 | 2.0 | 1e-4 | 4.8e-6 | 1.5e-3 | 2.8e15 | 2.22 |
| 14 | 90 | 2.0 | 9e-5 | 4.8e-6 | 1.4e-3 | 3.2e15 | 2.47 |
| 21 | 90 | 1.0 | 2e-4 | 3.4e-6 | 2.1e-3 | 2.0e15 | 1.57 |
| 30 | 90 | 0.7 | 2e-4 | 2.8e-6 | 2.1e-3 | 2.4e15 | 1.88 |
| 39 | 90 | 0.6 | 2e-4 | 2.6e-6 | 2.1e-3 | 2.6e15 | 2.02 |
| 47 | 90 | 0.5 | 3e-4 | 2.4e-6 | 2.6e-3 | 1.9e15 | 1.48 |
| 48 | 90 | 0.5 | 2e-4 | 2.4e-6 | 2.1e-3 | 2.8e15 | 2.22 |
| 56 | 90 | 0.4 | 3e-4 | 2.2e-6 | 2.6e-3 | 2.1e15 | 1.66 |
| 57 | 90 | 0.4 | 2e-4 | 2.2e-6 | 2.1e-3 | 3.2e15 | 2.48 |
| 64 | 90 | 0.3 | 4e-4 | 1.9e-6 | 3.0e-3 | 1.8e15 | 1.43 |
| 65 | 90 | 0.3 | 3e-4 | 1.9e-6 | 2.6e-3 | 2.5e15 | 1.91 |

| 5 | 60 | 5.0 | 9e-5 | 1.1e-5 | 1.4e-3 | 1.3e15 | 1.55 |
| 6 | 60 | 5.0 | 8e-5 | 1.1e-5 | 1.3e-3 | 1.5e15 | 1.74 |
| 7 | 60 | 5.0 | 7e-5 | 1.1e-5 | 1.2e-3 | 1.7e15 | 1.99 |
| 8 | 60 | 5.0 | 6e-5 | 1.1e-5 | 1.1e-3 | 2.0e15 | 2.32 |
| 13 | 60 | 2.0 | 1e-4 | 7.2e-6 | 1.5e-3 | 1.9e15 | 2.20 |
| 14 | 60 | 2.0 | 9e-5 | 7.2e-6 | 1.4e-3 | 2.1e15 | 2.44 |
| 21 | 60 | 1.0 | 2e-4 | 5.1e-6 | 2.1e-3 | 1.3e15 | 1.56 |
| 30 | 60 | 0.7 | 2e-4 | 4.3e-6 | 2.1e-3 | 1.6e15 | 1.86 |
| 39 | 60 | 0.6 | 2e-4 | 4.0e-6 | 2.1e-3 | 1.7e15 | 2.01 |
| 47 | 60 | 0.5 | 3e-4 | 3.6e-6 | 2.6e-3 | 1.3e15 | 1.47 |
| 48 | 60 | 0.5 | 2e-4 | 3.6e-6 | 2.1e-3 | 1.9e15 | 2.20 |
| 56 | 60 | 0.4 | 3e-4 | 3.2e-6 | 2.6e-3 | 1.4e15 | 1.64 |
| 57 | 60 | 0.4 | 2e-4 | 3.2e-6 | 2.1e-3 | 2.1e15 | 2.46 |
| 65 | 60 | 0.3 | 3e-4 | 2.8e-6 | 2.6e-3 | 1.6e15 | 1.89 |
| 66 | 60 | 0.2 | 2e-4 | 2.8e-6 | 2.1e-3 | 2.5e15 | 2.84 |

I am now making new sets of skypatch files for my 200 injections. The sky mismatches are [6e-5, 7e-5, 8e-5, 9e-5, 1e-4, 2e-4, 3e-4, 4e-4]. Some information about the skygrid files:
| msky | radius (rad) | avg npoints |
| 6e-5 | 0.0040 | 48 |
| 7e-5 | 0.0045 | 53 |
| 8e-5 | 0.0050 | 57 |
| 9e-5 | 0.0050 | 51 |
| 1e-4 | 0.0053 | 51 |
| 2e-4 | 0.0080 | 59 |
| 3e-4 | 0.0100 | 62 |
| 4e-4 | 0.0110 | 56 |
Now for the mismatch distributions. First, I plotted the distributions for mfreq >= 1 and mfreq < 1 separately. Here are the examples for Tcoh = 60:

mmHist_v1_Tcoh60_mfreqGE1.png

mmHist_v1_Tcoh60_mfreqLT1.png

And the handful of setups with smallest averaged measured mismatch:

mmHist_v1_Tcoh60_best.png

Based on these three plots (and assuming Tcoh=60 is representative, which as far as I can tell it is), in general the setups with mfreq>1 are not worth it, except for the combination (mfreq 2, msky 9e-5). The other way to get a low average measured mismatch is a setup with mfreq < 1 and msky 2e-4. In the third plot, we see that (mfreq 2, msky 9e-5) has a competitive m_avg but a wider range of measured mismatches than the others, with both more values close to zero and a tail to higher mismatches.

Here are the sets of best setups for the four Tcoh values I am focusing on now (I'm not showing the other non-best setups here but the histograms are attached to this page):
Tcoh 60h mmHist_v1_Tcoh60_best.png
Tcoh 90h mmHist_v1_Tcoh90_best.png
Tcoh 120h mmHist_v1_Tcoh120_best.png
Tcoh 150h mmHist_v1_Tcoh150_best.png
For each Tcoh, the best setup (in terms of lowest measured mismatch) is one of:
  • A: msky 2e-4 and the lowest mfreq that still gives an acceptable estimated runtime
  • B: msky 9e-5 and mfreq 2.0
I'm going to choose both of these setups for each Tcoh, giving me a total of 8 setups. In general, if mfreq=0.3 is available (as for Tcoh=60), the type A setup is better in terms of measured mismatch (average AND range), but if mfreq=0.4 is the smallest available freq mismatch (the other Tcohs), then sometimes the type B setup is actually better (this is really evident for Tcoh=120). In any case, the distributions are similar, so I am going to move all 8 setups forward.

Efficiency at fixed false alarm rate

As noted before, the mismatch generally increases with increasing Tcoh, which is expected. However, 2F for a signal (or equivalently, its SNR) also increases with increasing Tcoh. So, to compare setups across Tcoh values, I need to determine the efficiency vs h0, i.e., the fraction of signals at a given h0 that are detectable. There is a spread here, partly because h0 is only one of the factors that determine how loud we find a signal to be, and partly because noise fluctuations have an effect. So, of course, I will be making injections in noise.

The question of whether or not a signal is "detectable" means we have to decide on a detection threshold. We decide what this threshold is based on the number of false alarms we are willing to accept (that is, candidates that are higher than our threshold but that are in fact noise fluctuations). Sinéad said that she usually looks at nFA = 1 or 0.1, but since the result we want out of this is a comparison across setups (and not an absolute measure of sensitivity depth for a given setup), she doesn't expect the results of the comparison to change very much for reasonable nFAs. I am moving forward with nFA = 1.

Below are a set of figures; each row corresponds to a different setup, for 8 rows in total. The left column shows the expected distribution of 2F for nTemplates trials (where nTemplates is the total number of templates that will be included in the search, based on the output of find_valid_setups.py) and the right column shows a plot of nFA vs 2F. In both columns, the vertical dotted line marks the 2F value that gives nFA = 1 (and nFA = 1 is also marked by the horizontal dotted line).

2FsInGaussNoise_best.png

The numbers listed here as 2F_thr will be the detection thresholds for determining the efficiency.

Some thoughts (not crucial, just jotting things down):
  • We compute these false alarm probabilities in Gaussian noise using a few assumptions: 1) that our noise is stationary and Gaussian (there's not much we can do about this, so it's kind of trivial) and, more importantly, 2) that either the templates are completely independent OR the degree of independence is the same for all setups. In general, for E@H searches, when we make calculations based on the assumption of stationary Gaussian noise (e.g., when we calculate a candidate's critical ratio), we assume the degree of independence is constant across a search. As we have seen, this assumption tends to break down when the shape of the parameter space changes rapidly.
  • But, in looking at the efficiency at fixed false alarm rate, we are making the assumption that we can calculate the false alarm rate for a given setup accurately, OR at the very least that (as before) the degree of independence is roughly the same. This second assumption is probably generally safe for a standard CW search with ~three search dimensions (freq, fdot, and either f2dot or sky position) but now I only have two search dimensions, freq and sky position. In addition, because I no longer have fdot, I am requiring finer sky grids. Both of these will reduce the overall independence of templates ... are they reduced by the same amount? ...
A few things MAP pointed out:
  • Increasing Tcoh (while keeping computational cost the same) generally means increasing mismatch, but it also means increasing SNR for a signal. So, e.g., in going from Tcoh=60 to Tcoh=240, the SNR of a signal goes as the square root of Tcoh, so the SNR would increase by a factor of 2. At the same time, the average measured mismatch I'm seeing is increasing by a factor of 2 which drops the SNR by a factor of 2, and these two effects in this case cancel each other out.
  • But, previously I was looking at the longer Tcoh setups assuming a budget of 1EM, whereas I should be assuming 2, so in the end I suspect the longer Tcohs will be better.
  • I don't necessarily want to assume nFA = 1; we will be following up candidates, like we did in O1AS, so I should set nFA based on some chosen number of candidates to follow up. More on this later.
So with the first two points in mind, I am going to take another look at the setups with longer Tcohs.

Finding valid setups: Take 2a

I am redoing the find_valid_setups.py steps for Tcoh=180, 210, 240, 270 and with a bigger computational budget in mind. Again, I want to focus on having a small msky.
# Tcoh freq mismatch sky mismatch dfreq dtheta_patch Ntemplates runtime (EM)
7 180 5.0 7e-5 3.8e-6 1.2e-3 5.1e15 2.06
8 180 5.0 6e-5 3.8e-6 1.1e-3 6.0e15 2.41
13 180 2.0 1e-4 2.4e-6 1.5e-3 5.7e15 2.28
21 180 1.0 2e-4 1.7e-6 2.1e-3 4.0e15 1.62
30 180 0.7 2e-4 1.4e-6 2.1e-3 4.8e15 1.93
39 180 0.6 2e-4 1.3e-6 2.1e-3 5.2e15 2.08
48 180 0.5 2e-4 1.2e-6 2.1e-3 5.7e15 2.28
56 180 0.4 3e-4 1.1e-6 2.6e-3 4.2e15 1.70
65 180 0.3 3e-4 9.3e-7 2.6e-3 4.9e15 1.97
               
7 210 5.0 7e-5 3.3e-6 1.2e-3 6.0e15 2.12
8 210 5.0 6e-5 3.3e-6 1.1e-3 7.0e15 2.48
13 210 2.0 1e-4 2.1e-6 1.5e-3 6.6e15 2.35
21 210 1.0 2e-4 1.5e-6 2.1e-3 4.7e15 1.66
30 210 0.7 2e-4 1.2e-6 2.1e-3 5.6e15 1.99
39 210 0.6 2e-4 1.1e-6 2.1e-3 6.1e15 2.15
48 210 0.5 2e-4 6.6e15 2.1e-3 6.6e15 2.35
56 210 0.4 3e-4 9.2e-7 2.6e-3 5.0e15 1.75
65 210 0.3 3e-4 8.0e-7 2.6e-3 5.7e15 2.02
               
7 240 5.0 7e-5 2.9e-6 1.2e-3 6.9e15 2.10
8 240 5.0 6e-5 2.9e-6 1.1e-3 8.0e15 2.45
13 240 2.0 1e-4 1.8e-6 1.5e-3 7.6e15 2.32
21 240 1.0 2e-4 1.3e-6 2.1e-3 5.4e15 1.65
30 240 0.7 2e-4 1.1e-6 2.1e-3 6.4e15 1.97
39 240 0.6 2e-4 9.9e-7 2.1e-3 6.9e15 2.12
48 240 0.5 2e-4 9.0e-7 2.1e-3 7.6e15 2.33
56 240 0.4 3e-4 8.1e-7 2.6e-3 5.7e15 1.73
65 240 0.3 3e-4 7.0e-7 2.6e-3 6.5e15 2.00
               
7 270 5.0 7e-5 2.5e-6 1.2e-3 7.7e15 2.00
8 270 5.0 6e-5 2.5e-6 1.1e-3 9.0e15 2.33
13 270 2.0 1e-4 1.6e-6 1.5e-3 8.5e15 2.21
14 270 2.0 9e-5 1.6e-6 1.4e-3 9.5e15 2.46
21 270 1.0 2e-4 1.1e-6 2.1e-3 6.0e15 1.56
30 270 0.7 2e-4 9.5e-7 2.1e-3 7.2e15 1.87
39 270 0.6 2e-4 8.8e-7 2.1e-3 7.8e15 2.02
48 270 0.5 2e-4 8.0e-7 2.1e-3 8.5e15 2.21
57 270 0.4 2e-4 7.2e-7 2.1e-3 9.6e15 2.47
65 270 0.3 3e-4 6.2e-7 2.6e-3 7.4e15 1.90



Tcoh 180h mmHist_v1_Tcoh180_best.png
Tcoh 210h mmHist_v1_Tcoh210_best.png
Tcoh 240h mmHist_v1_Tcoh240_best.png
Tcoh 270h mmHist_v1_Tcoh270_best.png

Now, comparing all of these to the Tcoh=60 setup. I'm going to look at the ratio of Tcohs as well as the ratio of average mismatches; both affect a signal's 2F, and therefore SNR^2. The first increases a signal's SNR, while the latter decreases the SNR of the loudest recovered candidate. (The formula behind the last column is spelled out after the table.)
| Tcoh (h) | m_avg | ratio of Tcohs | ratio of m_avgs | SNR / SNR_Tcoh60 |
| 60 | 0.33 | 1 | 1 | 1 |
| 90 | 0.41 | 1.5 | 1.24 | 1.2 |
| 120 | 0.48 | 2 | 1.45 | 1.4 |
| 150 | 0.54 | 2.5 | 1.64 | 1.5 |
| 180 | 0.57 | 3 | 1.73 | 1.7 |
| 210 | 0.60 | 3.5 | 1.82 | 1.9 |
| 240 | 0.62 | 4 | 1.88 | 2.1 |
| 270 | 0.64 | 4.5 | 1.94 | 2.3 |
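
For the record, the last column is (up to rounding) just the ratio of the previous two columns, i.e., ${\rm SNR}/{\rm SNR}_{T_{\rm coh}=60} = (T_{\rm coh}/60\,{\rm h}) / (m_{\rm avg}/m_{\rm avg,60h})$; e.g., for Tcoh = 270, this is 4.5/1.94 ≈ 2.3.
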
It turns out that the longer Tcoh might actually be better for signals ... ?

Efficiency studies

As I've previously seen, the 2F threshold really only depends on Tcoh (since the number of templates is pretty similar for setups with the same Tcoh) and, of course, on the number of false alarms we accept. As previously discussed, nFA = 1 is maybe too stringent. In O1AS, we had 15 million candidates pass stage 0, which clustering reduced to 36000, a noise rejection of >99%. However, in O1AS we had let in a lot of disturbances that looked vaguely signal-like, which we don't expect to do here. In O1MD, the noise rejection of clustering was closer to 70%.
  • Scaling the 36000 post-clustering candidates of O1AS by the frequency range (480 Hz now vs. 80 Hz then), I will assume roughly 2e5 candidates to follow up after clustering.
  • If we then assume a noise rejection closer to O1MD's ~70%, this means roughly 5e5 candidates passing stage 0 of this search => nFA = 5e5.
I'm going to be doing injections at a handful of different h0 values that cover both nFA=1 and nFA=5e5. The thresholds then become the following (a sketch of the threshold calculation follows the table):

| Tcoh (h) | 2F thr (nFA = 1) | 2F thr (nFA = 5e5) |
| 60 | 8.17 | 7.06 |
| 90 | 9.36 | 7.91 |
| 120 | 10.45 | 8.69 |
| 150 | 11.51 | 9.43 |
| 180 | 12.38 | 10.04 |
| 210 | 13.16 | 10.58 |
| 240 | 14.14 | 11.26 |
| 270 | 15.45 | 12.17 |
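
A minimal sketch of where I understand these thresholds to come from, assuming fully independent templates in stationary Gaussian noise (the function name is mine):

    from scipy.stats import chi2

    def threshold_avg2F(n_templates, n_fa, n_seg):
        # In Gaussian noise, Nseg * <2F> follows a chi^2 distribution with
        # 4*Nseg degrees of freedom; the threshold on the average 2F giving
        # an expected n_fa false alarms among n_templates templates is:
        p_fa = n_fa / n_templates  # per-template false-alarm probability
        return chi2.isf(p_fa, 4 * n_seg) / n_seg

    # e.g. threshold_avg2F(2e15, 1, 49) comes out near 8.2, close to the
    # Tcoh=60 value in the table above.
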
So, when I make injections, I should make sure their 2F values cover at least this range of thresholds.

To get an idea of the 2F values, I'm using PredictFstat at a few different h0 values, with the SFT timestamps for this search. I gave it a declination of ~1.2 and a cosi of 0.5; an example call is below, followed by the predicted 2F values.
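
From memory, the call looks something like the following; the exact option names may differ between lalapps versions, so treat this as a sketch (with --assumeSqrtSX set to the run's noise level, and the timestamp file names as placeholders):

    lalapps_PredictFstat --h0=7e-26 --cosi=0.5 --psi=0 --phi0=0 \
        --Alpha=0 --Delta=1.2 --Freq=100 \
        --IFOs=H1,L1 --timestampsFiles=ts_H1.txt,ts_L1.txt --assumeSqrtSX=...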

| h0 | 2F: coherent (nseg=1) | Tcoh 60 (nseg=49) | Tcoh 90 (nseg=33) | Tcoh 120 (nseg=25) | Tcoh 150 (nseg=20) | Tcoh 180 (nseg=17) | Tcoh 210 (nseg=15) | Tcoh 240 (nseg=13) | Tcoh 270 (nseg=11) |
| 1e-26 | 6 | | | | | | | | |
| 3e-26 | 27 | 4.5 | 4.7 | 5 | 5.1 | 5.3 | 5.5 | 5.8 | 6 |
| 5e-26 | 67 | 5.3 | 5.9 | 6.5 | 7.1 | 7.7 | 8.2 | 8.8 | 9.7 |
| 7e-26 | 1.3e2 | 6.6 | 7.8 | 9 | 10 | 11 | 12 | 14 | 15 |
| 1e-25 | 2.5e2 | 9 | 11 | 14 | 16 | 18 | 20 | 23 | 26 |
So I should make injections with h0 values around 7e-26 to 1e-25.

Note: Ultimately I chose to perform the efficiency studies for a set of nFA values: 1, 1e3, 3e4, 3e6.

Here are a set of plots for each of the setups I am interested in. Each column corresponds to a different nFA for determining the threshold. Each row represents a different Tcoh; within a given row, the best setup with mfreq < 1 is shown in blue and the best setup with mfreq > 1 is shown in orange.

sigmoids_set0.png

I performed the sigmoid fits in the same way as was done previously for the S6CasA (and other) upper limits: ignoring any points with efficiency at 100% or below 30%, and weighting by the binomial uncertainties (sqrt(Npq)) on the efficiency.
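
A sketch of such a fit in python (the function names and the exact sigmoid parametrization are mine; the S6CasA code may differ in detail):

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(h0, a, b):
        # Two-parameter sigmoid for detection efficiency vs h0.
        return 1.0 / (1.0 + np.exp(-(h0 - a) / b))

    def fit_h0_90(h0_vals, n_det, n_inj):
        # h0_vals, n_det, n_inj: numpy arrays over the injected h0 values.
        eff = n_det / n_inj
        keep = (eff >= 0.3) & (eff < 1.0)   # drop 100% and <30% points
        h0, e, n = h0_vals[keep], eff[keep], n_inj[keep]
        sigma = np.sqrt(e * (1.0 - e) / n)  # binomial sqrt(Npq)/N on eff
        popt, _ = curve_fit(sigmoid, h0, e, sigma=sigma,
                            p0=[np.median(h0), 0.1 * np.median(h0)])
        a, b = popt
        # Invert the sigmoid at 90% efficiency:
        return a - b * np.log(1.0 / 0.9 - 1.0)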

To get the h0_90% values, I just have to find where these curves cross 0.9 on the y axis. Next, I want to compare the h0_90% values across the different setups, for a given nFA:

h0vsTcoh_nFA1_set0.pngh0vsTcoh_nFA1e3_set0.png

h0vsTcoh_nFA3e4_set0.pngh0vsTcoh_nFA3e6_set0.png

In general, it seems like the {Tcoh:90, mfreq>1} option is the best or among the best. Most of these points overlap, but there is a trend towards the lower Tcoh values doing better, with a minimum (i.e., best h0_90%) at Tcoh=90.

Timing

The next step is to check how long these searches will actually take. They were all estimated at around 2.5 months, but this is not necessarily accurate, especially given how different a zero-spindown search is.

To test this, I ran searches over 50 mHz and 10000 skygrid points at a time, on a few compute nodes on atlas (a3701 to a3708) that are approximately equivalent to an average volunteer computer. For each setup, I ran searches at sets of frequencies covering the search band; this is because the sideband grows linearly with frequency, since the size of the Doppler (and, when applicable, spindown) wing is proportional to frequency. By knowing roughly how the timing increases with frequency, due to both the sidebands and the growing number of skygrid points, I can estimate the runtime of each 50 mHz band and sum them up to get the total estimated runtime.
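
A rough sketch of this extrapolation (all names are mine, and it glosses over the Resamp step-function behavior noted below):

    import numpy as np

    def total_runtime_EM(f_test, t_test, nsky_of_f, nsky_test=10000,
                         f_lo=20.0, f_hi=500.0, band=0.05,
                         n_nodes=13000 / 1.2):
        # Per-skypoint runtime of a 50 mHz band grows roughly linearly
        # with frequency (Doppler sidebands); fit that from the test bands.
        lin = np.polyfit(f_test, np.asarray(t_test) / nsky_test, 1)
        f_bands = np.arange(f_lo, f_hi, band)
        # Scale each band by its actual skygrid size (~quadratic in f);
        # nsky_of_f maps an array of frequencies to skygrid sizes.
        t_total = np.sum(np.polyval(lin, f_bands) * nsky_of_f(f_bands))
        return t_total / n_nodes / (30 * 24 * 3600.0)  # seconds -> EM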

Some details (i.e., braindump):
  • Instead of calling lalapps directly, I used a standalone executable that was one of the ones sent to the E@H volunteers. The one I used is on atlas at /home/einstein/EinsteinAtHome_Runs/O1MD1CV/apps/einstein_O1MD1CV_1.00_x86_64-pc-linux-gnu__AVX, but the more up to date ones (currently) are in /home/einstein/EinsteinAtHome_Runs/O2AS20-500/apps.
  • I used the search options from O1MD1 as much as possible; i.e., I took the Fstar0 and other params related to the Bayesian statistics from that search, and also specified the same Fstat calculation methods: --FstatMethod=ResampBest --FstatMethodRecalc=DemodBest. In general, Resamp will be faster than Demod as long as the number of frequency bins being searched is large (>13000 is the number I was given).
  • But, note that Resamp requires the number of frequency bins to be a power of 2, and will zero-pad the ends to reach the next power of two. Thus, the runtime does not increase smoothly with frequency but is instead a step function, jumping whenever the number of frequency bins crosses a power of 2.
  • After estimating the total runtime, I still need to divide by the number of E@H "nodes" to get the overall E@H runtime ("EM" = "E@H month"). From Sinéad's scripts, the number of nodes she uses in her estimates (which was updated in January 2018) is 13000; this is more an effective number of users rather than an actual number of users, as it folds in things like the difference between the "slow" and "fast" hosts. (The E@H volunteer computers can be roughly divided into these two populations.) I also divided this number by 1.2, to account for the loss of some LSC clusters (if you're reading this in the future: We have just left the LSC). Note that, overall, there is going to be a larger than normal uncertainty in the runtime, as it's uncertain what will happen to the number of users.
  • For picking the 10000 skygrid points, I initially just took either the first 10000 points from an all-sky file or a skypatch, BUT given that the sidebands matter a lot for the timing of this particular search (and here the sidebands are entirely determined by the Doppler wings), I had to redo the tests with a completely uniformly random skygrid.
Here is an example of what the results look like:

Screen_Shot_2018-08-16_at_14.36.11.png

9 EM is roughly twice as long as we want (we are aiming for closer to 4). The other setup at this Tcoh had a smaller mfreq and larger msky. For a given setup, the sidebands increase linearly with frequency, but the number of skygrid points increases quadratically; so, to get the runtime down, I really want to focus on the setups with small mfreq and large msky.

At this point, I iterated over the setups many times to try to find an appropriate one, and to correct everything I had wrong before. Out of the setups I tested, the only one that had an estimated time of less than 5 EM was {Tcoh:60, mfreq:0.4, msky:2e-4}:

Screen_Shot_2018-09-18_at_10.04.26.png

Note that the step function seen in the first plot is because of how Resamp behaves, as described above.

This (and a more careful read of Sinéad's documentation) led me to realize that I shouldn't just use this dfreq as-is, but should take two things into account (a small diagnostic sketch follows the list):
  • Making sure NsFFTs is not slightly above a power of 2 (which causes Resamp to go to the next power of 2), and
  • Making sure 0.05/dFreq is roughly a whole number ...
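
Something like the following is what I have in mind as a sanity check (n_sideband_bins, the number of extra Doppler-wing bins entering the FFT, would be estimated separately; everything here is illustrative):

    import numpy as np

    def check_dfreq(dfreq, n_sideband_bins, band=0.05):
        nbins = band / dfreq
        print("0.05/dFreq = %.4f (want ~a whole number)" % nbins)
        n_sffts = int(round(nbins)) + n_sideband_bins
        next_pow2 = 2 ** int(np.ceil(np.log2(n_sffts)))
        pad = 100.0 * (next_pow2 - n_sffts) / next_pow2
        print("NsFFTs = %d, padded by Resamp to %d (%.0f%% wasted)"
              % (n_sffts, next_pow2, pad))
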
-- SylviaZhu - 22 May 2018
Attachments:
| Attachment | Size | Date | Who |
| 2FsInGaussNoise_best.png | 208 K | 18 Jul 2018 - 16:42 | SylviaZhu |
| Screen_Shot_2018-07-16_at_14.58.30.png | 21 K | 16 Jul 2018 - 13:00 | SylviaZhu |
| Screen_Shot_2018-07-16_at_14.59.04.png | 22 K | 16 Jul 2018 - 13:00 | SylviaZhu |
| Screen_Shot_2018-07-16_at_14.59.48.png | 22 K | 16 Jul 2018 - 13:00 | SylviaZhu |
| Screen_Shot_2018-07-16_at_17.43.45.png | 55 K | 16 Jul 2018 - 15:44 | SylviaZhu |
| Screen_Shot_2018-07-16_at_17.48.06.png | 17 K | 16 Jul 2018 - 15:49 | SylviaZhu |
| Screen_Shot_2018-07-16_at_17.56.07.png | 18 K | 16 Jul 2018 - 15:56 | SylviaZhu |
| Screen_Shot_2018-08-16_at_14.36.11.png | 53 K | 16 Aug 2018 - 12:37 | SylviaZhu |
| Screen_Shot_2018-08-16_at_14.52.05.png | 56 K | 16 Aug 2018 - 12:52 | SylviaZhu |
| Screen_Shot_2018-09-18_at_10.04.26.png | 194 K | 18 Sep 2018 - 08:05 | SylviaZhu |
| h0vsTcoh_nFA1_set0.png | 20 K | 10 Aug 2018 - 09:58 | SylviaZhu |
| h0vsTcoh_nFA1e3_set0.png | 21 K | 10 Aug 2018 - 09:58 | SylviaZhu |
| h0vsTcoh_nFA3e4_set0.png | 21 K | 10 Aug 2018 - 09:58 | SylviaZhu |
| h0vsTcoh_nFA3e6_set0.png | 21 K | 10 Aug 2018 - 09:58 | SylviaZhu |
| mmHist_example_Tcoh60_mfreq5.0.png | 29 K | 17 Jul 2018 - 18:07 | SylviaZhu |
| mmHist_v0_Tcoh150.png | 57 K | 16 Jul 2018 - 15:20 | SylviaZhu |
| mmHist_v0_Tcoh180.png | 56 K | 16 Jul 2018 - 15:20 | SylviaZhu |
| mmHist_v0_Tcoh210.png | 55 K | 16 Jul 2018 - 15:20 | SylviaZhu |
| mmHist_v0_Tcoh240.png | 55 K | 16 Jul 2018 - 15:20 | SylviaZhu |
| mmHist_v0_Tcoh270.png | 55 K | 16 Jul 2018 - 15:20 | SylviaZhu |
| mmHist_v1_Tcoh120_best.png | 37 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh120_mfreqGE1.png | 41 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh120_mfreqLT1.png | 54 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh150_best.png | 38 K | 18 Jul 2018 - 15:36 | SylviaZhu |
| mmHist_v1_Tcoh150_mfreqGE1.png | 42 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh150_mfreqLT1.png | 55 K | 18 Jul 2018 - 15:36 | SylviaZhu |
| mmHist_v1_Tcoh180_best.png | 38 K | 20 Jul 2018 - 10:49 | SylviaZhu |
| mmHist_v1_Tcoh210_best.png | 33 K | 20 Jul 2018 - 10:49 | SylviaZhu |
| mmHist_v1_Tcoh240_best.png | 33 K | 20 Jul 2018 - 10:50 | SylviaZhu |
| mmHist_v1_Tcoh270_best.png | 32 K | 20 Jul 2018 - 10:50 | SylviaZhu |
| mmHist_v1_Tcoh60_best.png | 38 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh60_mfreqGE1.png | 46 K | 18 Jul 2018 - 15:42 | SylviaZhu |
| mmHist_v1_Tcoh60_mfreqLT1.png | 54 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh90_best.png | 38 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh90_mfreqGE1.png | 46 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| mmHist_v1_Tcoh90_mfreqLT1.png | 54 K | 18 Jul 2018 - 15:35 | SylviaZhu |
| sigmoids_set0.png | 222 K | 10 Aug 2018 - 09:48 | SylviaZhu |