These issues are related to the web browser caching old files. You can solve this by forcing a hard reload with the following key combination while your browser window is active.
For Chrome: Ctrl+F5 or Shift+F5
For Firefox: Ctrl+F5
For more information, check the following two links:
Be it for the Analyse tab or the Explore tab, the normalization is performed on all columns of the table whose values are integers (non-decimal / count data). It is therefore highly recommended to remove any numeric columns from the data file that do not belong to the experiment, especially if they are integers.
Good to Know: To avoid computational errors when dealing with data containing zero values, a count of +1 is added by default to the entire data set (integer columns only) immediately after importing the data. You can see this in the table in the Import tab. If you are sure that your data doesn't contain any zeros, or if you only intend to use your data to produce plots in the Explore tab, you can opt out of it by clicking the check box in the Import tab.
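The pseudocount step can be sketched as follows; the column names and count values here are purely illustrative, not taken from any real data file:

```python
# Toy count table as a dict of columns; the sample names are hypothetical
# placeholders following the t<time>.samp_r<replicate> pattern.
counts = {
    "t1.samp_r1": [0, 5, 12],
    "t1.samp_r2": [3, 0, 10],
}

# Mimic the default behaviour: add a pseudocount of +1 to every count
# column so that later log2 fold changes never hit a zero.
counts = {col: [v + 1 for v in vals] for col, vals in counts.items()}

print(counts["t1.samp_r1"])  # [1, 6, 13]
```

Opting out in the Import tab simply skips this +1 step, leaving the raw counts untouched.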
As you may have observed, the “Control Filter” in the Analyse tab filters the resulting table based only on the controls (ctrl_a and ctrl_b). If you want to explore this filtered data set in the Explore panel, download the result, re-upload it, head to the Explore panel, set the filter to zero and explore your data.
Let's say, in the Analyse tab, you have assigned ctrl_a the samples t1.samp_r1 and t1.samp_r2, and you have assigned exp_a the samples t2.samp_r1 and t2.samp_r2, where \(t\) stands for time point and \(r\) stands for replicate.
In the checked state (figure above), the log2 fold change between ctrl_a and exp_a is calculated from the mean counts of the samples assigned to them.
\[ctrl\_a = \frac{t1.samp\_r1 + t1.samp\_r2}{2}\] \[exp\_a = \frac{t2.samp\_r1 + t2.samp\_r2}{2}\] \[\mathbf{par\_a} = \log_2\Big(\frac{exp\_a}{ctrl\_a}\Big)\]
These simplified equations explain the calculation of par_a for the example above. When the slider limits are set for \(\mathbf{par\_a}\), with means mode picks those candidates that satisfy these limits for \(\mathbf{par\_a}\). The other parameters par_b, par_d and par_e are calculated similarly. par_c is calculated as,
\[par\_c = par\_a - par\_b\]
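The with means calculation can be traced with a few toy numbers (arbitrary illustrative counts, not from any real data set):

```python
from math import log2

# Toy replicate counts; the values are made up for illustration.
t1_samp_r1, t1_samp_r2 = 10, 14   # samples assigned to ctrl_a
t2_samp_r1, t2_samp_r2 = 40, 56   # samples assigned to exp_a

# "with means" mode: average the replicates first, then take the log2 ratio.
ctrl_a = (t1_samp_r1 + t1_samp_r2) / 2   # 12.0
exp_a = (t2_samp_r1 + t2_samp_r2) / 2    # 48.0
par_a = log2(exp_a / ctrl_a)             # log2(48 / 12) = log2(4)
print(par_a)  # 2.0
```

With par_b computed the same way from its own control/experiment pair, par_c is then just the difference par_a - par_b.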
\[par\_a_1 = \log_2\Big(\frac{t2.samp\_r1}{t1.samp\_r1}\Big) \] \[par\_a_2 = \log_2\Big(\frac{t2.samp\_r2}{t1.samp\_r2}\Big) \]
When the slider limits are set for \(\mathbf{par\_a}\), per replicate mode picks only those candidates that satisfy these limits for both \(par\_a_1\) and \(par\_a_2\). The other parameters par_b, par_d and par_e are calculated similarly. par_c is then calculated as,
\[par\_c_1 = par\_a_1 - par\_b_1\] \[par\_c_2 = par\_a_2 - par\_b_2\]
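The stricter per replicate filter can be sketched like this; again, the counts and slider limits are arbitrary illustrative values:

```python
from math import log2

# Toy counts per replicate (made-up numbers for illustration).
t1 = {"r1": 10, "r2": 16}   # samples assigned to ctrl_a
t2 = {"r1": 40, "r2": 32}   # samples assigned to exp_a

# "per replicate" mode: one log2 fold change per replicate pair.
par_a_1 = log2(t2["r1"] / t1["r1"])   # log2(40 / 10) = 2.0
par_a_2 = log2(t2["r2"] / t1["r2"])   # log2(32 / 16) = 1.0

# A candidate passes only if BOTH replicate values fall within the
# slider limits (hypothetical limits shown here).
lo, hi = 0.5, 3.0
passes = lo <= par_a_1 <= hi and lo <= par_a_2 <= hi
print(passes)  # True
```

Note that in with means mode the same candidate would be judged on a single averaged value instead of two per-replicate values.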
NOTE: Even when the program runs in per replicate mode, the par_a, par_b, etc. values in the result table are reported as if they had been computed in with means mode. This is done purely for convenience: if there are n replicates, the program would generate n values per parameter, and the table would expand so much that it would make no sense to look at those values. Therefore, the result table always shows the mean-based values, even though the slider limits behave differently in the two modes.