###### Questions in Science & Mathematics
4 days ago
I am trying to determine the magnitude error between demodulated I/Q values and reference values on a constellation plot for a QPSK signal. The reference points are at (1,0), (0,1), (0,-1), and (-1,0). Of course, as the amplitude of the input signal increases, the measured points move further away from the references. My (limited) understanding is that the measured points should lie near the expected ones for the resulting constellation to be useful, with the proximity determined by the quality of the signal but not by its amplitude. Is there a standard scaling method for mapping the measured output into the range of the references?
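For context on what such scaling could look like: one common convention (used in EVM-style measurements) is to scale the measured symbols by the complex gain that best fits them to the references in the least-squares sense, so amplitude drops out before the error is computed. A minimal Python sketch, assuming each measured symbol is already paired with its intended reference point:

```python
def normalize(measured, refs):
    """Scale the measured symbols by the complex gain g minimizing
    sum |g*m_k - r_k|^2 (least-squares fit to the references)."""
    g = sum(r * m.conjugate() for m, r in zip(measured, refs)) \
        / sum(abs(m) ** 2 for m in measured)
    return [g * m for m in measured]

def evm_rms(measured, refs):
    """RMS error vector magnitude after gain normalization,
    relative to the RMS magnitude of the references."""
    scaled = normalize(measured, refs)
    err = sum(abs(s - r) ** 2 for s, r in zip(scaled, refs))
    return (err / sum(abs(r) ** 2 for r in refs)) ** 0.5

# The four reference points from the question, as complex numbers:
refs = [1 + 0j, 1j, -1j, -1 + 0j]
```

With this normalization, a clean signal received at, say, three times the reference amplitude still yields an EVM of zero; only genuine distortion and noise contribute to the error.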

This question has been viewed 15 times

4 days ago
I have only the color data (not the depth and confidence channels) of this dataset. I need to distinguish hand gestures, but the first step is to recognize the hand. I used the skin-color method, but my problem is that I don't have training data to distinguish the hand from the face, and I can't find previous work that has done this before. I tried other color spaces, but there is no difference between face and hand, and there is nothing from which to infer the depth of objects. One of my results is as follows:
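As an aside on the skin-color step itself: a fixed YCrCb threshold (a common heuristic; the ranges below are an assumption, not derived from this dataset) fires on both hand and face, which is exactly why color alone cannot separate them. A per-pixel sketch:

```python
def is_skin(r, g, b):
    """Classify one RGB pixel as skin using a common YCrCb
    threshold heuristic (threshold ranges are an assumption)."""
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    return 133 <= cr <= 173 and 77 <= cb <= 127
```

Since hand and face pixels fall in the same range, the separation has to come from something other than color, e.g. the shape, size, motion, or relative position of the detected skin blobs.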

This question has been viewed 9 times

4 days ago
In Melliès' survey *Categorical Semantics of Linear Logic*, a cut-elimination procedure for intuitionistic linear logic is given, which includes the following case: 3.9.3 Promotion vs.

This question has been viewed 65 times

4 days ago
First, note this paper: http://ttic.uchicago.edu/~madhurt/Papers/reductions.pdf, where a Lasserre SDP is set up for the independent set problem at the bottom of page 4, and the author says, "It can be shown that for any set $S$ with $|S| \leq t$, the vectors $U_{S'}$, $S' \subseteq S$ induce a probability distribution over valid independent sets of the subgraph induced by $S$."

This question has been viewed 7 times

4 days ago
I am currently in the last year of an Economics and Mathematics bachelor's program in Pakistan. One of the requirements is to complete a research project. I need ideas for topics related to economics or Pakistan's economy, but I don't want to reinvent the wheel. Empirical research in any area of economics is feasible as long as it is not too narrow. Any sort of help would be welcome. Thank you!

This question has been viewed 31 times

4 days ago
In this paper, the authors use a Canadian debit card dataset: http://www.cirano.qc.ca/pdf/publication/2009s-23.pdf. I am not sure where to get such a dataset. My broad question is: what kind of indicator can I use that reflects the real-time economic activity of people? Something that indicates how much people are buying on an hourly basis, for example. I am willing to look at unconventional datasets or to scrape information from scratch. I thought about using Amazon review timestamps, for example, but I want to know when people are buying something. Ideally for the United States or a smaller region.
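On the timestamp idea: if you do scrape event timestamps, aggregating them into an hourly activity series is straightforward. A minimal sketch (the timestamp format string is an assumption about what the scraped data would look like):

```python
from collections import Counter
from datetime import datetime

def hourly_counts(timestamps, fmt="%Y-%m-%d %H:%M:%S"):
    """Bucket raw event timestamps into counts per hour by
    truncating each timestamp to the top of its hour."""
    return Counter(
        datetime.strptime(t, fmt).replace(minute=0, second=0, microsecond=0)
        for t in timestamps
    )
```

The resulting counter maps each hour to its event count, which can then be plotted or compared against a conventional indicator. The caveat with review timestamps specifically is the lag between purchase and review, so the series measures reviewing activity, not buying activity.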

This question has been viewed 23 times

4 days ago
I have an existing classification model that uses labelled data to categorize short texts. I get new data every day and want to improve the classifier by making use of it, so at regular intervals a new model is trained. But creating training data is a difficult task, and I am looking for methods to reduce the effort of creating more of it. A few methods I have considered:
- Clustering: group the short-text data using some clustering approach, though choosing the number of clusters could be a problem.
- Use the existing categorization model: there's already a good classifier built for this.
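The second idea is essentially self-training (pseudo-labeling): let the existing classifier label the incoming texts and keep only its high-confidence predictions as extra training data. A minimal sketch, where `predict_proba` is a hypothetical stand-in for the existing model, returning a dict of {label: probability}:

```python
def pseudo_label(predict_proba, texts, threshold=0.9):
    """Auto-label new texts with an existing classifier, keeping only
    predictions whose top-class probability clears the threshold."""
    labeled = []
    for text in texts:
        probs = predict_proba(text)
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            labeled.append((text, label))
    return labeled
```

The threshold trades label quality against quantity: too low, and the model is retrained on its own mistakes; too high, and little new data survives.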

This question has been viewed 11 times

4 days ago
Let $(y_i, x_i)$ be a data pair, where $x_i \in \mathbb{R}^N$ is the feature vector and $y_i \in \{0,1\}$ is the outcome. Given a predictive model $f(x,\theta): \mathbb{R}^N \times \mathbb{R}^M \to [0,1]$, the performance of $f$ is commonly evaluated with the log-loss function (https://www.kaggle.com/c/criteo-display-ad-challenge/details/evaluation, https://www.kaggle.com/c/avazu-ctr-prediction/details/evaluation): $-\frac{1}{K}\sum_{i=1}^{K}\left[ y_i \log f(x_i,\theta) + (1-y_i) \log\left(1 - f(x_i,\theta)\right)\right]$. Yet in this setting the value of the log loss is affected by the actual probability of $y$. For instance, given two ad copies, one with feature $x_1$ and CTR 0.
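To see the base-rate effect concretely, here is the metric in plain Python; even the best constant predictor, the empirical CTR itself, scores very differently on a balanced dataset than on a rare-click one:

```python
import math

def log_loss(y_true, y_pred):
    """Average negative log-likelihood of binary outcomes."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / len(y_true)
```

A constant prediction of 0.5 on 50%-CTR data scores ln 2 ≈ 0.693, while a constant 0.01 on 1%-CTR data scores ≈ 0.056, so raw log-loss values are not comparable across datasets with different base rates.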

This question has been viewed 16 times