Full-text access: 312 paid articles, 13 free
By subject: Management 27; Ethnology 9; Demography 22; Theory and Methodology 24; General 1; Sociology 101; Statistics 141
By year: 2023: 3; 2022: 4; 2021: 8; 2020: 22; 2019: 28; 2018: 24; 2017: 34; 2016: 11; 2015: 3; 2014: 12; 2013: 66; 2012: 13; 2011: 12; 2010: 7; 2009: 6; 2008: 4; 2007: 4; 2006: 4; 2005: 10; 2004: 4; 2003: 4; 2002: 5; 2001: 6; 2000: 2; 1999: 6; 1997: 3; 1996: 2; 1994: 1; 1992: 1; 1991: 2; 1986: 1; 1984: 1; 1983: 2; 1982: 3; 1981: 1; 1980: 2; 1979: 1; 1976: 2; 1975: 1
325 search results in total.
1.
This paper examines the re-aestheticisation of hunger and poverty with the emergence of austerity blogs. These blogs chronicle personal narratives while redirecting the gaze towards cooking on limited budgets and sharing the intimate brutalities of hunger, bringing renewed focus and interest to poverty through the daily lived experience of hunger. Beyond personalising hunger in a climate of austerity, blogs, as lay articulations addressed to a general public, become interstitial spaces between government rhetoric and media representations, making poverty an intimate, personal and present proposition. As people's archives of social history, blogs are hybrid spaces of personal iteration open to public consumption and media scrutiny. In the process they can re-mediate and disrupt the social reality of first-world hunger, inviting a gaze through first-hand narratives. Poverty becomes a contested entity online, where blogs both resist and reiterate neo-liberal stereotypes about the unemployed and those on benefits.
2.
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
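A minimal delta-adjustment sketch of the kind of sensitivity analysis described above, assuming the lifelines package and a synthetic data set; it is not the authors' likelihood-based estimator. The missing quality-of-life covariate is imputed under a range of assumed departures delta from missingness at random, and a Cox proportional hazards model is refit each time. All variable names and the data-generating step are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
qol = rng.normal(size=n)                         # quality-of-life covariate
time = rng.exponential(1.0 / np.exp(0.5 * qol))  # survival time depends on qol
event = (rng.random(n) < 0.8).astype(int)        # crude censoring indicator, for illustration only
miss = rng.random(n) < 0.3                       # 30% of qol values unobserved

for delta in [-1.0, -0.5, 0.0, 0.5, 1.0]:        # assumed shift of the unobserved qol values
    qol_imp = qol.copy()
    qol_imp[miss] = qol[~miss].mean() + delta    # mean imputation shifted by the sensitivity parameter
    df = pd.DataFrame({"time": time, "event": event, "qol": qol_imp})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(f"delta={delta:+.1f}  log-HR(qol)={cph.params_['qol']:+.3f}")
```

If the estimated log-hazard ratio moves materially across plausible values of delta, the conclusions are sensitive to the assumed missing-data mechanism, which is the message such a sensitivity analysis is meant to convey.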
3.
4.
This note responds to a previous model of conformity in public goods contributions developed by Carpenter (2004), in which the population evolves according to the standard replicator dynamic (Taylor and Jonker, 1978; Maynard Smith, 1982). To confirm his theoretical prediction, Carpenter ran an experiment showing that free riding grows faster when agents have the information necessary to conform. The model and the experiment are, however, inherently different, because the time scales of the model cannot capture the short-run convergence of behavior observed in the experimental laboratory. We present a model of conformity that reproduces the same laboratory results as Carpenter's without resorting to evolutionary models, and that also gives agents the chance to adopt different strategies implying various levels of cooperation.
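For context, here is a minimal sketch of the standard replicator dynamic (Taylor and Jonker, 1978) in a linear public goods game, showing why free riding spreads under payoff-based evolution; the marginal per-capita return m and cost c are assumed values, and this is neither Carpenter's conformity model nor the alternative model proposed in this note.

```python
import numpy as np

m, c = 0.5, 1.0        # marginal per-capita return and contribution cost (assumed)
x, dt = 0.9, 0.01      # initial share of contributors and Euler step size
payoff_gap = -(1.0 - m) * c                # pi_contribute - pi_free_ride, constant in a linear game
for step in range(2000):
    x += dt * x * (1.0 - x) * payoff_gap   # replicator equation: xdot = x(1 - x)(pi_C - pi_D)
print(f"contributor share after 2000 steps: {x:.3f}")
```

Because the payoff gap is negative whenever m < 1, the contributor share decays toward zero, and it does so gradually on the dynamic's own time scale, the long-run character that the note contrasts with short-run laboratory behavior.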
5.
In this paper, we propose hard thresholding regression (HTR) for estimating high‐dimensional sparse linear regression models. HTR uses a two‐stage convex algorithm to approximate ℓ0‐penalized regression: the first stage calculates a coarse initial estimator, and the second stage identifies the oracle estimator by borrowing information from the first one. Theoretically, the HTR estimator achieves the strong oracle property over a wide range of regularization parameters. Numerical examples and a real data example lend further support to our proposed methodology.
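A generic two-stage sketch in the spirit of this description, assuming scikit-learn: a lasso fit supplies the coarse pilot estimator, its coefficients are hard-thresholded, and the retained variables are refit by ordinary least squares. The penalty level, threshold and simulated design are illustrative assumptions, not the paper's tuning rules.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p, s = 200, 500, 5                              # samples, dimension, true sparsity
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 3.0
y = X @ beta + rng.normal(size=n)

pilot = Lasso(alpha=0.1).fit(X, y).coef_           # stage 1: coarse convex initial estimate
tau = 0.5                                          # hard-threshold level (tuning parameter)
support = np.flatnonzero(np.abs(pilot) > tau)      # stage 2: keep large pilot coefficients ...
refit = LinearRegression().fit(X[:, support], y)   # ... and refit them by least squares
beta_hat = np.zeros(p)
beta_hat[support] = refit.coef_
print("selected indices:", support)
```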
6.
For estimating an unknown parameter $\theta$, we introduce and motivate the use of balanced loss functions of the form $L_{\rho,\omega,\delta_0}(\theta,\delta) = \omega\,\rho(\delta_0,\delta) + (1-\omega)\,\rho(\theta,\delta)$, as well as the weighted version $q(\theta)\,L_{\rho,\omega,\delta_0}(\theta,\delta)$, where $\rho(\theta,\delta)$ is an arbitrary loss function, $\delta_0$ is a chosen a priori “target” estimator of $\theta$, $\omega \in [0,1)$, and $q(\cdot)$ is a positive weight function. We develop Bayesian estimators under $L_{\rho,\omega,\delta_0}$ with $\omega > 0$ by relating them to Bayesian solutions under $L_{\rho,\omega,\delta_0}$ with $\omega = 0$. Illustrations are given for various choices of $\rho$, such as absolute value, entropy, linex and squared error type losses. Finally, under various robust Bayesian analysis criteria, including posterior regret gamma-minimaxity, conditional gamma-minimaxity and most stable, we establish explicit connections between optimal actions derived under balanced and unbalanced losses.
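As a concrete special case (a standard calculation, not taken from this article), choosing $\rho$ to be squared error makes the balanced-loss Bayes estimator an explicit convex combination of the target $\delta_0$ and the usual $\omega = 0$ Bayes estimator:

```latex
\[
  \rho(\theta,\delta) = (\delta-\theta)^2
  \quad\Longrightarrow\quad
  \delta^{B}_{\omega}(x)
  = \arg\min_{\delta}\Big\{ \omega\,\big(\delta-\delta_0(x)\big)^2
      + (1-\omega)\,E\big[(\delta-\theta)^2 \mid x\big] \Big\}
  = \omega\,\delta_0(x) + (1-\omega)\,E[\theta \mid x].
\]
```

Setting the derivative in $\delta$ to zero gives the final equality, an instance of the general link between the $\omega > 0$ and $\omega = 0$ solutions described above.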
7.
In this article, based on generalized order statistics from a family of proportional hazard rate models, we use a statistical test to generate a class of preliminary test estimators and shrinkage preliminary test estimators for the proportionality parameter. These estimators are compared under the Pitman measure of closeness (PMC) as well as the MSE criterion. Although PMC generally suffers from non-transitivity, it is transitive within the first class of estimators, and we obtain the Pitman-closest estimator. Analytical and graphical methods are used to show the range of the parameter over which the preliminary test and shrinkage preliminary test estimators perform better than their competitors. Results reveal that when the prior information is not too far from its true value, the proposed estimators are superior under both criteria.
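Pitman closeness comparisons can be checked by simulation. The sketch below uses plain NumPy and an arbitrary exponential-mean example (not the article's generalized-order-statistics setting) to estimate P(|δ1 − θ| < |δ2 − θ|) for a shrinkage estimator against the sample mean; the shrinkage weight and prior guess are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, theta0, n, reps = 2.0, 1.5, 10, 20000   # true mean, prior guess, sample size, replications
wins = 0
for _ in range(reps):
    x = rng.exponential(theta, size=n)
    d1 = 0.7 * x.mean() + 0.3 * theta0         # shrinkage estimator toward the guess (weight arbitrary)
    d2 = x.mean()                              # usual estimator
    wins += int(abs(d1 - theta) < abs(d2 - theta))
print(f"estimated PMC(shrinkage vs. mean) = {wins / reps:.3f}")
```

A value above 0.5 means the shrinkage estimator is Pitman-closer at this particular θ; as the abstract stresses, the ranking depends on how far the prior guess sits from the true value.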
8.
Muslims constitute about 14% of India's population and are the largest religious minority community, spread across the length and breadth of the country. This minority community has been relegated to the lowest socio-economic stratum of Indian society, especially since the partition and independence of the country. In the state of Jammu and Kashmir, however, Muslims are in the majority, constituting about 67% of the state's population. The current study explores and explains the Concentration Index of the Muslim population, variation in literacy rates and work participation, the occupational structure across regions and religions, and the interrelationships among the concentration of the Muslim population, literacy rate and work participation in Jammu and Kashmir. The study is based on secondary information from Census 2001, supplemented with government reports and published work where necessary. With respect to education and employment in Jammu and Kashmir, Muslims report a lower share among literates and in the category of other workers, and a higher share in the occupational categories of cultivators, agricultural labourers, household industry workers and non-workers, in comparison with all religious groups combined. Thus, despite being in the majority, their situation resembles that of their co-religionists at the all-India level.
9.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
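A deliberately simplified sketch of the control-based ("copy reference") imputation step, in plain NumPy: subjects who drop out have their unobserved post-dropout events drawn from a Poisson law with the control-arm rate over their remaining planned follow-up. It omits the piecewise exponential intensity, the frailty, the posterior sampling and the Rubin/bootstrap variance correction, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

def impute_counts(obs_events, exposure, planned, dropped, control_rate, n_imp=5):
    """Return n_imp completed total-event counts per subject."""
    remaining = np.where(dropped, planned - exposure, 0.0)      # unobserved follow-up time
    imputed = rng.poisson(control_rate * remaining, size=(n_imp, len(obs_events)))
    return obs_events + imputed                                 # observed plus imputed events

# toy data: six subjects, one year of planned follow-up, two early dropouts
obs_events = np.array([2, 0, 1, 3, 1, 0])
exposure   = np.array([1.0, 0.4, 1.0, 1.0, 0.6, 1.0])           # years actually observed
dropped    = exposure < 1.0
control_rate = 1.8                                              # control-arm events per year (assumed)
print(impute_counts(obs_events, exposure, 1.0, dropped, control_rate))
```

Each completed data set would then be analysed with the chosen recurrent-event model and the results combined, with the variance corrected along the lines the abstract describes.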
10.
This study investigates how to enhance the capability of the Scatter Search (SS) metaheuristic in guiding the search effectively toward elite solutions. Generally, SS generates a population of random initial solutions and systematically selects a set of diverse and elite solutions as a reference set for guiding the search. The work focuses on three strategies that may affect the performance of SS: explicit solution combination, dynamic memory update, and systematic search re-initialization. First, the original SS is applied. Second, we propose two versions of SS (SSV1 and SSV2) with different strategies. In contrast to the original SS, SSV1 and SSV2 use both the quality and the diversity of solutions to create and update the memory, to perform solution combinations, and to update the search. The differences between SSV1 and SSV2 are that SSV1 employs the hill-climbing routine twice, whereas SSV2 employs hill climbing and an iterated local search method, and that SSV1 combines all pairs of quality and diverse solutions from the RefSet, whereas SSV2 combines only one pair. Both SSV1 and SSV2 update the RefSet dynamically rather than statically (as in the original SS): whenever a better-quality or more diverse solution is found, the worst solution in the RefSet is replaced by the new one. SSV1 and SSV2 also employ the diversification generation method twice to re-initialize the search. The performance of SS is tested on three benchmark post-enrolment course timetabling problems. The results show that SSV2 performs better than the original SS and SSV1 in terms of both solution quality and computational time, clearly demonstrating the effectiveness of dynamic memory update, systematic search re-initialization, and combining only one pair of elite solutions. In addition, SSV1 and SSV2 produce good-quality solutions that are comparable with other approaches and outperform some approaches reported in the literature on some instances of the tested datasets. Moreover, the study shows that combining (by simple crossover) only one pair of elite solutions in each RefSet update and updating the memory dynamically reduces the computational time.
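A bare-bones scatter-search skeleton on a toy continuous minimisation problem, written to highlight the dynamic reference-set update discussed above: the worst RefSet member is replaced as soon as a better trial solution appears. The objective, parameters and the simple hill-climbing improvement step are assumptions for illustration; this is not the SSV1/SSV2 course-timetabling implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, pop_size, ref_size, iters = 5, 30, 6, 200

def f(x):
    return float(np.sum((x - 3.0) ** 2))      # toy objective to minimise (assumed)

def hill_climb(x, steps=20, sigma=0.1):
    """Improvement method: accept random perturbations that reduce f."""
    for _ in range(steps):
        y = x + rng.normal(scale=sigma, size=dim)
        if f(y) < f(x):
            x = y
    return x

# diversification generation + improvement, then a quality-based reference set
pop = [hill_climb(rng.uniform(-10, 10, dim)) for _ in range(pop_size)]
refset = sorted(pop, key=f)[:ref_size]        # diversity criterion omitted for brevity

for _ in range(iters):
    a, b = rng.choice(ref_size, size=2, replace=False)
    lam = rng.uniform(-0.5, 1.5)
    trial = hill_climb(refset[a] + lam * (refset[b] - refset[a]))   # combine one pair, then improve
    worst = max(range(ref_size), key=lambda i: f(refset[i]))
    if f(trial) < f(refset[worst]):           # dynamic update: replace the worst member immediately
        refset[worst] = trial

best = min(refset, key=f)
print("best objective value:", round(f(best), 4))
```

Combining only one pair per iteration and updating the RefSet as soon as an improvement is found mirrors the SSV2 choices that the study reports as reducing computational time.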