911.
In clinical trials, continuous monitoring of the event incidence rate plays a critical role in making timely decisions that affect the trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given treatment duration (e.g., the incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before the intended treatment duration is reached is challenging, especially when the event onset profile evolves over time with accumulated exposure. In particular, early in the study, ignoring censored subjects may introduce significant bias into the estimate of the cumulative event incidence rate. This problem is addressed using a predictive approach in the Bayesian framework, in which experts' prior knowledge about both the frequency and the timing of event occurrence is combined with the observed data. More specifically, at any interim look, each event-free subject is counted with a probability derived from the prior knowledge. The proposed approach is particularly useful for signal detection in early-stage studies based on limited information, but it can also serve as a safety-monitoring tool (e.g., for a data monitoring committee) in later-stage trials. Application of the approach is illustrated with a case study in which the incidence rate of an adverse event is continuously monitored during an Alzheimer's disease clinical trial. The performance of the proposed approach is also assessed and compared with other Bayesian and frequentist methods via simulation. Copyright © 2015 John Wiley & Sons, Ltd.
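The counting step can be sketched as follows. This is a minimal illustration under an assumed constant-hazard (exponential) event-time model with a conjugate Gamma prior on the hazard rate, not the exact model of the paper; the function name, prior parameters, and interim data are hypothetical.

```python
import numpy as np

def predicted_incidence(event, exposure, T, a0=1.0, b0=10.0):
    """Posterior-predictive estimate of the cumulative event proportion at
    treatment duration T. Each event-free subject is counted with the
    probability that an event would still occur by T, under an assumed
    constant-hazard (exponential) event-time model with a Gamma(a0, b0)
    prior on the hazard rate.

    event    : 1 if the subject has already had the event, 0 otherwise
    exposure : observed exposure time (time of event, or current follow-up)
    T        : intended treatment duration (e.g., 2 years)
    """
    event = np.asarray(event, dtype=float)
    exposure = np.asarray(exposure, dtype=float)

    # Conjugate update: hazard rate | data ~ Gamma(a0 + #events, b0 + total exposure)
    a_post = a0 + event.sum()
    b_post = b0 + exposure.sum()

    # For an event-free subject followed for t < T, the posterior-predictive
    # probability of an event in (t, T] is 1 - E[exp(-lambda * (T - t))],
    # which has a closed form under the Gamma posterior.
    remaining = np.clip(T - exposure, 0.0, None)
    p_future = 1.0 - (b_post / (b_post + remaining)) ** a_post

    counted = np.where(event == 1, 1.0, p_future)
    return counted.mean()

# Hypothetical interim data: 3 observed events, 5 event-free subjects with partial exposure
events = [1, 1, 1, 0, 0, 0, 0, 0]
times = [0.4, 1.1, 1.8, 0.5, 0.9, 1.2, 1.6, 2.0]  # years of exposure
print(predicted_incidence(events, times, T=2.0))
```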
912.
Testing between hypotheses, when independent sampling is possible, is a well-developed subject. In this paper, we propose hypothesis tests that are applicable when the samples are obtained using Markov chain Monte Carlo. These tests are useful when one is interested in deciding whether the expected value of a certain quantity is above or below a given threshold. We show non-asymptotic error bounds and bounds on the expected number of samples for three types of tests: a fixed-sample-size test, a sequential test with an indifference region, and a sequential test without an indifference region. Our tests can lead to significant savings in sample size. We illustrate our results on an example of Bayesian parameter inference involving an ODE model of a biochemical pathway.
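A minimal sketch of a sequential test with an indifference region is given below. It uses batch means of the draws together with a simple Hoeffding-style half-width rather than the non-asymptotic bounds developed in the paper; the function name, batching scheme, and toy sampler are illustrative assumptions.

```python
import numpy as np

def sequential_threshold_test(sampler, threshold, delta, alpha=0.05,
                              batch=100, max_batches=1000, bound=1.0):
    """Sequentially decide whether E[f] is above or below `threshold`,
    with indifference region (threshold - delta, threshold + delta).

    `sampler(n)` returns n draws of f (assumed to lie in [0, bound]).
    Batch means with a Hoeffding-style half-width are used purely for
    illustration; they do not reproduce the paper's non-asymptotic bounds
    for correlated MCMC output.
    """
    batch_means = []
    estimate = float("nan")
    for _ in range(max_batches):
        batch_means.append(np.mean(sampler(batch)))
        m = len(batch_means)
        estimate = float(np.mean(batch_means))
        # Hoeffding-type half-width for the mean of m values bounded in [0, bound]
        half_width = bound * np.sqrt(np.log(2.0 / alpha) / (2.0 * m))
        if estimate - half_width > threshold - delta:
            return "above", estimate   # confident E[f] exceeds threshold - delta
        if estimate + half_width < threshold + delta:
            return "below", estimate   # confident E[f] is under threshold + delta
    return "undecided", estimate

# Toy example: i.i.d. Bernoulli draws standing in for an MCMC sampler of an indicator
rng = np.random.default_rng(0)
decision, estimate = sequential_threshold_test(
    lambda n: rng.binomial(1, 0.62, size=n), threshold=0.5, delta=0.02)
print(decision, round(estimate, 3))
```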
913.
Within the microelectronics industry, there is growing concern regarding the introduction of counterfeit electronic parts into the supply chain. Even though this problem is widespread, there have been limited attempts to implement risk-based approaches to testing and supply chain management. Supply chain risk management tends to focus on highly visible disruptions of the supply chain rather than on the covert entry of counterfeits, which makes counterfeit risk difficult to mitigate. This article provides an overview of the complexities of the electronics supply chain and highlights gaps in current risk assessment practices. In particular, it calls for enhanced traceability capabilities to track and trace at-risk parts through the various stages of the supply chain. A greater focus on risk-informed decision making is needed, through strategies that include prioritizing high-risk parts, moving beyond certificates of conformance, incentivizing best supply chain management practices, adopting industry standards, and designing and managing for supply chain resilience.
914.
915.
The article details a sampling scheme that can reduce sample size and cost in clinical and epidemiological studies of the association between a count outcome and a risk factor. We show that inference in two common generalized linear models for count data, Poisson and negative binomial regression, is improved by using a ranked auxiliary covariate to guide the sampling procedure. This type of sampling has typically been used to improve inference on a population mean. The novelty of the current work is its extension to log-linear models and derivations showing that the sampling technique yields an increase in information relative to simple random sampling. Specifically, we show that under the proposed sampling strategy the maximum likelihood estimate of the risk factor's coefficient is improved through an increase in the Fisher information. A simulation study compares the mean squared error, bias, variance, and power of the sampling routine with simple random sampling under various data-generating scenarios. We also illustrate the merits of the sampling scheme on a real data set from a clinical setting of males with chronic obstructive pulmonary disease. Empirical results from the simulation study and the data analysis coincide with the theoretical derivations, suggesting that a significant reduction in sample size, and hence study cost, can be realized while achieving the same precision as a simple random sample.
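The flavor of such a comparison can be sketched with a small Monte Carlo experiment in the spirit of the paper's simulation study: draw a ranked set sample guided by an auxiliary variable, fit a Poisson regression, and compare the variability of the slope estimate with that under simple random sampling. The data-generating model, set size, and parameter values below are made up for illustration and are not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def simulate_units(n, beta0=0.5, beta1=0.8):
    """Generate n units: risk factor x, correlated auxiliary a, Poisson count y."""
    x = rng.normal(size=n)
    a = x + rng.normal(scale=0.5, size=n)   # auxiliary variable, used only for ranking
    y = rng.poisson(np.exp(beta0 + beta1 * x))
    return x, a, y

def fit_slope(x, y):
    """Poisson-regression MLE of the risk factor's coefficient."""
    X = sm.add_constant(x)
    return sm.GLM(y, X, family=sm.families.Poisson()).fit().params[1]

def ranked_set_sample(n, k=3):
    """Size-n ranked set sample: from each set of k candidates, measure the unit
    whose auxiliary value has a prescribed rank, cycling through ranks 1..k."""
    xs, ys = [], []
    for i in range(n):
        x, a, y = simulate_units(k)
        j = np.argsort(a)[i % k]
        xs.append(x[j])
        ys.append(y[j])
    return np.array(xs), np.array(ys)

# Monte Carlo comparison of the slope estimator under SRS and ranked set sampling
reps, n = 200, 90
srs_est, rss_est = [], []
for _ in range(reps):
    x, _, y = simulate_units(n)
    srs_est.append(fit_slope(x, y))
    rss_est.append(fit_slope(*ranked_set_sample(n)))
print("var(SRS) =", np.var(srs_est), " var(RSS) =", np.var(rss_est))
```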
916.
The primary aim of market segmentation is to identify relevant groups of consumers that can be addressed efficiently by marketing or advertising campaigns. This paper addresses whether consumer groups can be identified from background variables that are not brand-related, and how much personality versus socio-demographic variables contribute to the identification of consumer clusters. This is done by clustering aggregated preferences for 25 brands across 5 product categories, and by relating socio-demographic and personality variables to the clusters using logistic regression and random forests over a range of numbers of clusters. Results indicate that some personality variables contribute significantly to the identification of consumer groups in one sample. However, these results were not replicated in a second sample that was more heterogeneous in its socio-demographic characteristics and not representative of the brands' target audiences.
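A compact sketch of this two-step analysis is shown below, with synthetic data standing in for the brand-preference and background measurements; the sample sizes, variable counts, number of clusters, and use of a random forest alone are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-ins: preference ratings for 25 brands and 8 background
# variables (socio-demographics plus personality scores) for 400 respondents.
n_respondents, n_brands, n_background = 400, 25, 8
preferences = rng.normal(size=(n_respondents, n_brands))
background = rng.normal(size=(n_respondents, n_background))

# Step 1: segment consumers by clustering their brand preferences.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(preferences)

# Step 2: ask whether the non-brand background variables can recover the segments.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
accuracy = cross_val_score(rf, background, segments, cv=5).mean()
print(f"cross-validated accuracy of predicting segment membership: {accuracy:.2f}")

# Feature importances indicate which background variables (if any) separate the segments.
importances = rf.fit(background, segments).feature_importances_
print(importances.round(3))
```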
917.
Essential elements such as copper and manganese may exhibit U-shaped exposure-response relationships because toxic responses occur as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model the copper excess and deficiency exposure-response relationships separately. That analysis used a severity scoring system to place diverse toxic responses on a common severity scale, allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess equals that due to deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. These methods permit the analysis of all available exposure-response data from multiple studies reporting multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of the U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimizes the overall risk associated with the agent of interest.
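The geometry of the U-shaped curve and the crossing point can be illustrated with two simple logistic exposure-response curves on the log-dose scale. The coefficients below are invented rather than estimates from the copper database, and the combination rule (independent excess and deficiency risks) is an assumption for illustration, not the paper's categorical-regression formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

# Illustrative logistic exposure-response curves on the log10(dose) scale.
a_def, b_def = 1.0, -2.0    # deficiency risk decreases with dose (b_def < 0)
a_exc, b_exc = -6.0, 2.5    # excess (toxicity) risk increases with dose (b_exc > 0)

def p_deficiency(logdose):
    return expit(a_def + b_def * logdose)

def p_excess(logdose):
    return expit(a_exc + b_exc * logdose)

def p_adverse(logdose):
    """Probability of an adverse outcome from deficiency or excess,
    treating the two responses as independent (an illustrative assumption)."""
    return 1.0 - (1.0 - p_deficiency(logdose)) * (1.0 - p_excess(logdose))

# Closed-form crossing point: the two logistic curves are equal exactly where
# their linear predictors coincide.
crossing = (a_def - a_exc) / (b_exc - b_def)

# Bottom of the U: the exposure minimizing the overall adverse-outcome probability.
opt = minimize_scalar(p_adverse, bounds=(-2.0, 5.0), method="bounded")
print(f"crossing log-dose = {crossing:.3f}, risk-minimizing log-dose = {opt.x:.3f}")
```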
918.
This article examines the way advertising was rationalized in the early twentieth-century United States. Drawing on a targeted archival comparison with the United Kingdom, I show how the extensive mobilization undertaken to legitimate and rationalize advertising, rather than changes in the techniques employed in the content of ads themselves, was seen by actors in the mid-1920s to explain most of the extraordinary advances made by American advertising. Building on that comparison, I show how American advertising was transformed, particularly around World War I, into a legitimate profession situated at the center of a network of expertise about consumers and their media. Under the banner of "truth in advertising," ads came to be regarded as a legitimate, rational, and sustained business investment, leading to an enormous increase in aggregate expenditures. I argue that future research should examine how this process fuelled mass media and contributed to the conditions for modern consumerism.
919.
In Italy, homosexual people are not permitted to pursue donor insemination, surrogacy, or adoption, so they become parents mainly in the context of previous heterosexual relationships. The current study examines the experiences of 34 gay fathers and 32 lesbian mothers with children from a heterosexual relationship. Data were collected on awareness of one's homosexuality, reasons for marriage and parenthood, and the process of coming out to children. Most participants reported not being aware of their homosexuality when they married and became parents. The most common reasons for marriage were "love" and "social expectancy," whereas parenthood was motivated mainly by the "desire for children and family." Most participants had come out to at least one child and reported a positive reaction. The most frequently cited benefit of coming out was "openness/not hiding anymore." The results suggest that the lives of gay and lesbian parents are shaped by their sexual minority status as well as by societal heterosexism.
920.
Schneider's Dynamic Model of Postcolonial English Development (2007) suggests that distinct local identities and their associated varieties of English emerge as a result of British colonization, and reach maturity only when ties to the colonial power are finally severed. While this developmental trajectory is well documented in many of the case studies discussed in, and since, Schneider (2007), a comparison of Hong Kong and Gibraltar shows that, in certain cases, association with Britain can be seen as the best guarantor of these local identities and varieties of English. The present article sketches this alternative developmental trajectory and examines under what circumstances it may emerge and how widely it might apply.