
BMJ Open
Indexed in: SCIE (ISI), Scopus (2011-2023)
ISSN: 2044-6055
Country: United Kingdom
Publisher: BMJ Publishing Group
Featured articles
Total hip or knee replacement is highly successful when judged by prosthesis-related outcomes. However, some people experience long-term pain.
To review published studies of representative populations with total hip or knee replacement for the treatment of osteoarthritis that report the proportion of people by long-term pain intensity.
The MEDLINE and EMBASE databases were searched to January 2011 with no language restrictions. Citations of key articles in ISI Web of Science and reference lists were checked.
Prospective studies of consecutive, unselected osteoarthritis patients representative of the primary total hip or knee replacement population, with intensity of patient-centred pain measured at follow-up of 3 months to 5 years.
Two authors screened titles and abstracts. Data extracted by one author were checked independently against original articles by a second. For each study, the authors summarised the proportions of people with different severities of pain in the operated joint.
Searches identified 1308 articles of which 115 reported patient-centred pain outcomes. Fourteen articles describing 17 cohorts (6 with hip and 11 with knee replacement) presented appropriate data on pain intensity. The proportion of people with an unfavourable long-term pain outcome in studies ranged from about 7% to 23% after hip and 10% to 34% after knee replacement. In the best quality studies, an unfavourable pain outcome was reported in 9% or more of patients after hip and about 20% of patients after knee replacement.
Other studies reported mean values of pain outcomes. These and routine clinical studies are potential sources of relevant data.
After hip and knee replacement, a significant proportion of people have painful joints. There is an urgent need to improve general awareness of this possibility and to address determinants of good and bad outcomes.
To systematically examine the evidence of harms and benefits relating to time spent on screens for children and young people’s (CYP) health and well-being, to inform policy.
Systematic review of reviews undertaken to answer the question ‘What is the evidence for health and well-being effects of screentime in children and adolescents (CYP)?’ Electronic databases were searched for systematic reviews in February 2018. Eligible reviews reported associations between time on screens (screentime; any type) and any health/well-being outcome in CYP. Quality of reviews was assessed and strength of evidence across reviews evaluated.
13 reviews were identified (1 high quality, 9 medium and 3 low quality): 6 addressed body composition; 3 diet/energy intake; 7 mental health; 4 cardiovascular risk; 4 fitness; 3 sleep; 1 pain; and 1 asthma. We found moderately strong evidence for associations between screentime and greater obesity/adiposity and higher depressive symptoms; moderate evidence for an association between screentime and higher energy intake, less healthy diet quality and poorer quality of life. There was weak evidence for associations of screentime with behaviour problems, anxiety, hyperactivity and inattention, poorer self-esteem, poorer well-being, poorer psychosocial health, metabolic syndrome, poorer cardiorespiratory fitness, poorer cognitive development, lower educational attainment and poor sleep outcomes. There was no or insufficient evidence for an association of screentime with eating disorders or suicidal ideation, individual cardiovascular risk factors, asthma prevalence or pain. Evidence for threshold effects was weak. We found weak evidence that small amounts of daily screen use are not harmful and may have some benefits.
There is evidence that higher levels of screentime are associated with a variety of health harms for CYP, with evidence strongest for adiposity, unhealthy diet, depressive symptoms and quality of life. Evidence to guide policy on safe CYP screentime exposure is limited.
PROSPERO registration number: CRD42018089483.
To date, delirium prevalence and incidence in acute hospitals have been estimated from pooled findings of studies performed in distinct patient populations.
To determine delirium prevalence across an acute care facility.
Design: A point prevalence study.
Setting: A large tertiary care teaching hospital.
Participants: 311 general hospital adult inpatients were assessed over a single day. Of those, 280 (90%) had full data collected within the study's time frame.
Initial screening for inattention was performed using the spatial span forwards and months backwards tests by junior medical staff, followed by two independent formal delirium assessments: first the Confusion Assessment Method (CAM) by trained geriatric medicine consultants and registrars, and, subsequently, the Delirium Rating Scale-Revised-98 (DRS-R98) by experienced psychiatrists. The diagnosis of delirium was ultimately made using DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) criteria.
Using DSM-IV criteria, 55 of 280 patients (19.6%) had delirium versus 17.6% using the CAM. Using the DRS-R98 total score for independent diagnosis, 20.7% had full delirium, and 8.6% had subsyndromal delirium. Prevalence was higher in older patients (4.7% if <50 years and 34.8% if >80 years) and particularly in those with prior dementia (OR=15.33, p<0.001), even when adjusted for potential confounders. Although 50.9% of delirious patients had pre-existing dementia, it was poorly documented in the medical notes. Delirium symptoms detected by medical notes, nurse interview and patient reports did not overlap much, with inattention noted by professional staff, and acute change and sleep-wake disturbance noted by patients.
Our point prevalence study confirms that delirium occurs in about one in five general hospital inpatients, and particularly in those with prior cognitive impairment. Recognition strategies may need to be tailored to the symptoms most noticed by the detector (patient, nurse or primary physician) if formal assessments are not available.
The last decade has seen the introduction of new technology which has transformed many aspects of our culture, commerce, communication and education. This study examined how medical teachers and learners are using mobile computing devices such as the iPhone in medical education and practice, and how they envision them being used in the future.
Semistructured interviews were conducted with medical students, residents and faculty to examine participants’ attitudes about the current and future use of mobile computing devices in medical education and practice. A thematic approach was used to summarise ideas and concepts expressed, and to develop an online survey. A mixed methods approach was used to integrate qualitative and quantitative findings.
Medical students, residents and faculty at a large Canadian medical school in 2011.
Interviews were conducted with 18 participants (10 students, 7 residents and 1 faculty member). A total of 213 participants responded to the online survey (76 students, 65 residents and 41 faculty members). Over 85% of participants reported using a mobile computing device. The main uses described for mobile devices related to information management, communication and time management. Advantages identified were portability, flexibility, access to multimedia and the ability to look up information quickly. Challenges identified included superficial learning, not understanding how to find good learning resources, distraction, inappropriate use and concerns about access and privacy. Both medical students and physicians expressed the view that the use of these devices in medical education and practice will increase in the future.
This new technology offers the potential to enhance learning and patient care, but also has potential problems associated with its use. It is important for leadership in medical schools and healthcare organisations to set the agenda in this rapidly developing area to maximise the benefits of this powerful new technology while avoiding unintended consequences.
To quantify global intakes of key foods related to non-communicable diseases in adults by region (n=21), country (n=187), age and sex, in 1990 and 2010.
We searched and obtained individual-level intake data in 16 age/sex groups worldwide from 266 surveys across 113 countries. We combined these data with food balance sheets available in all nations and years. A hierarchical Bayesian model estimated mean food intake and associated uncertainty for each age-sex-country-year stratum, accounting for differences in intakes versus availability, survey methods and representativeness, and sampling and modelling uncertainty.
Population: Global adult population, by age, sex, country and time.
In 2010, global fruit intake was 81.3 g/day (95% uncertainty interval 78.9–83.7), with country-specific intakes ranging from 19.2 to 325.1 g/day; in only 2 countries (representing 0.4% of the world's population) did mean intakes meet the recommended target of ≥300 g/day. Country-specific vegetable intake ranged from 34.6 to 493.1 g/day (global mean 208.8 g/day); corresponding values for nuts/seeds were 0.2–152.7 g/day (8.9 g/day); for whole grains, 1.3–334.3 g/day (38.4 g/day); for seafood, 6.0–87.6 g/day (27.9 g/day); for red meats, 3.0–124.2 g/day (41.8 g/day); and for processed meats, 2.5–66.1 g/day (13.7 g/day). Mean national intakes met recommended targets in countries representing 0.4% of the global population for vegetables (≥400 g/day); 9.6% for nuts/seeds (≥4 (28.35 g) servings/week); 7.6% for whole grains (≥2.5 (50 g) servings/day); 4.4% for seafood (≥3.5 (100 g) servings/week); 20.3% for red meats (≤1 (100 g) serving/week); and 38.5% for processed meats (≤1 (50 g) serving/week). Intakes of healthful foods were generally higher, and of less healthful foods generally lower, at older ages. Intakes were generally similar by sex. Vegetable, seafood and processed meat intakes were stable over time; fruit, nut/seed and red meat intakes increased; and whole grain intake decreased.
These global dietary data by nation, age and sex identify key challenges and opportunities for optimising diets, informing policies and priorities for improving global health.
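The hierarchical Bayesian step described above pools noisy country-level survey estimates toward a global mean, borrowing strength where data are sparse. The Python sketch below illustrates that general idea with simple empirical-Bayes shrinkage; the survey means, sample sizes and common within-country SD are invented for the example, and this is not the authors' actual model.

import numpy as np

# Hypothetical survey data: mean fruit intake (g/day) and sample size per country.
survey_mean = np.array([60.0, 150.0, 95.0, 310.0, 20.0])
survey_n = np.array([120, 2500, 40, 800, 15])
survey_sd = 80.0                       # assumed within-country SD (g/day)

se2 = survey_sd**2 / survey_n          # sampling variance of each survey mean
tau2 = np.var(survey_mean, ddof=1)     # crude between-country variance estimate
mu = np.average(survey_mean, weights=1.0 / (se2 + tau2))  # pooled global mean

# Shrinkage: noisy (small-n) surveys borrow more strength from the pooled mean.
w = tau2 / (tau2 + se2)
pooled_estimate = w * survey_mean + (1.0 - w) * mu

for m, n, p in zip(survey_mean, survey_n, pooled_estimate):
    print(f"survey mean {m:6.1f} (n={n:4d}) -> pooled estimate {p:6.1f} g/day")

Small surveys (n=15 here) are pulled substantially toward the pooled mean while large ones barely move; the published model additionally accounts for intake-versus-availability differences, survey methods and representativeness.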
To measure test accuracy of non-invasive prenatal testing (NIPT) for Down, Edwards and Patau syndromes using cell-free fetal DNA and identify factors affecting accuracy.
Design: Systematic review and meta-analysis of published studies.
Data sources: PubMed, Ovid Medline, Ovid Embase and the Cochrane Library, searched from 1997 to 9 February 2015, followed by weekly autoalerts until 1 April 2015.
Eligibility criteria: English-language journal articles describing case–control studies with ≥15 trisomy cases or cohort studies with ≥50 pregnant women who had been given NIPT and a reference standard.
Of 2012 publications retrieved, 41, 37 and 30 studies were included in the reviews for Down, Edwards and Patau syndromes, respectively. Quality appraisal identified a high risk of bias in the included studies, and funnel plots showed evidence of publication bias. Pooled sensitivity was 99.3% (95% CI 98.9% to 99.6%) for Down, 97.4% (95.8% to 98.4%) for Edwards, and 97.4% (86.1% to 99.6%) for Patau syndrome. The pooled specificity was 99.9% (99.9% to 100%) for all three trisomies. In 100 000 pregnancies in the general obstetric population we would expect 417, 89 and 40 cases of Down, Edwards and Patau syndromes to be detected by NIPT, with 94, 154 and 42 false positive results. Sensitivity was lower in twin than singleton pregnancies, reduced by 9% for Down, 28% for Edwards and 22% for Patau syndrome. Pooled sensitivity was also lower in the first trimester of pregnancy, in studies in the general obstetric population, and in cohort studies with consecutive enrolment.
NIPT using cell-free fetal DNA has very high sensitivity and specificity for Down syndrome, with slightly lower sensitivity for Edwards and Patau syndrome. However, it is not 100% accurate and should not be used as a final diagnosis for positive cases.
PROSPERO registration number: CRD42014014947.
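The per-100 000 projections in these results follow from expected-value arithmetic: detected cases ≈ sensitivity × affected pregnancies, and false positives ≈ (1 − specificity) × unaffected pregnancies. The Python sketch below reproduces figures close to the abstract's; the per-trisomy prevalences and the unrounded specificities are back-solved assumptions chosen for illustration, not values taken from the paper.

# Expected detected cases and false positives per 100 000 pregnancies,
# given test sensitivity, specificity and an assumed disease prevalence.
N = 100_000

trisomies = {
    # name: (sensitivity, specificity, assumed affected pregnancies per 100 000)
    "Down":    (0.993, 0.99906, 420),
    "Edwards": (0.974, 0.99845, 91),
    "Patau":   (0.974, 0.99958, 41),
}

for name, (sens, spec, cases) in trisomies.items():
    detected = sens * cases                  # true positives
    false_pos = (1 - spec) * (N - cases)     # unaffected pregnancies flagged
    print(f"{name:8s} detected ~ {detected:5.0f}   false positives ~ {false_pos:5.0f}")

These counts also explain the caution above about positive results: roughly 417 true against 94 false positives for Down syndrome implies a positive predictive value of about 82% even at 99.9% specificity, so positives need diagnostic confirmation.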
Risk scores are recommended in guidelines to facilitate the management of patients who present with acute coronary syndromes (ACS). Internationally, such scores are not systematically used because they are not easy to apply and some risk indicators are not available at first presentation. We aimed to derive and externally validate a more accurate version of the Global Registry of Acute Coronary Events (GRACE) risk score for predicting the risk of death or death/myocardial infarction (MI) both acutely and over the longer term. The risk score was designed to be suitable for acute and emergency clinical settings and usable in electronic devices.
The GRACE risk score (2.0) was derived in 32 037 patients from the GRACE registry (14 countries, 94 hospitals) and validated externally in the French registry of Acute ST-elevation and non-ST-elevation MI (FAST-MI) 2005.
Participants: Patients presenting with ST-elevation and non-ST-elevation ACS and with long-term outcomes.
The GRACE risk score (2.0) predicts the risk of short-term and long-term mortality, and death/MI, overall and in hospital survivors.
For key independent risk predictors of death (1 year), non-linear associations (vs linear) were found for age (p<0.0005), systolic blood pressure (p<0.0001), pulse (p<0.0001) and creatinine (p<0.0001). Employing non-linear algorithms improved model discrimination, and this was validated externally. Using the FAST-MI 2005 cohort, the c indices for death exceeded 0.82 for the overall population at 1 year and also at 3 years. Discrimination for death or MI was slightly lower than for death alone (c=0.78). Similar results were obtained for hospital survivors, and with substitutions for creatinine and Killip class the model performed nearly as well.
The updated GRACE risk score has better discrimination and is easier to use than the previous score based on linear associations. GRACE Risk (2.0) performed equally well acutely and over the longer term and can be used in a variety of clinical settings to aid management decisions.
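To illustrate why replacing linear terms with non-linear ones can improve discrimination, the sketch below fits a logistic model to synthetic data in which age has a U-shaped effect on the log-odds of death, once with a linear age term and once with a spline basis (scikit-learn's SplineTransformer). The data and coefficients are invented; this shows the general technique, not the GRACE 2.0 algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
age = rng.uniform(30, 90, 5000)

# Simulate a U-shaped (non-linear) relation between age and log-odds of death.
logit = 0.004 * (age - 55) ** 2 - 2.5
died = rng.random(5000) < 1 / (1 + np.exp(-logit))

X = age.reshape(-1, 1)
linear = LogisticRegression().fit(X, died)
spline = make_pipeline(SplineTransformer(degree=3, n_knots=5),
                       LogisticRegression()).fit(X, died)

# For a binary outcome, the c-index equals the AUROC.
print("c-index, linear age term:", roc_auc_score(died, linear.predict_proba(X)[:, 1]))
print("c-index, spline age term:", roc_auc_score(died, spline.predict_proba(X)[:, 1]))

The spline model recovers the U-shape and yields the higher c-index, mirroring the improvement the abstract attributes to the non-linear associations.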
Antenatal care (ANC) is an essential part of primary healthcare and its provision has expanded worldwide. There is limited evidence of large-scale cross-country studies on the impact of ANC offered to pregnant women on child health outcomes. We investigate the association of ANC in low-income and middle-income countries with short- and long-term mortality and nutritional child outcomes.
We used nationally representative health and welfare data from 193 Demographic and Health Surveys conducted between 1990 and 2013 from 69 low-income and middle-income countries for women of reproductive age (15–49 years), their children and their respective household.
The analytical sample consisted of 752 635 observations for neonatal mortality, 574 675 observations for infant mortality, 400 426 observations for low birth weight, 501 484 observations for stunting and 512 424 observations for underweight.
Outcome variables are neonatal and infant mortality, low birth weight, stunting and underweight.
At least one ANC visit was associated with a 1.04 percentage point lower probability of neonatal mortality and a 1.07 percentage point lower probability of infant mortality. Having at least four ANC visits, and having seen a skilled provider at least once, reduced these probabilities by a further 0.56 and 0.42 percentage points, respectively. At least one ANC visit was also associated with a 3.82 percentage point lower probability of giving birth to a low birthweight baby, and with 4.11 and 3.26 percentage point lower probabilities of stunting and underweight; at least four ANC visits and at least one skilled-provider contact reduced these probabilities by a further 2.83, 1.41 and 1.90 percentage points, respectively.
ANC services as currently provided and used in low-income and middle-income countries are directly associated with improved birth outcomes and longer-term reductions in child mortality and malnourishment.
We validate a machine learning-based sepsis-prediction algorithm for the detection and prediction of sepsis, severe sepsis and septic shock, using only six vital signs.
A machine-learning algorithm with gradient tree boosting. Features for prediction were created from combinations of six vital sign measurements and their changes over time.
A mixed-ward retrospective dataset from the University of California, San Francisco (UCSF) Medical Center (San Francisco, California, USA) as the primary source, an intensive care unit dataset from the Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA) as a transfer-learning source and four additional institutions’ datasets to evaluate generalisability.
684 443 total encounters, with 90 353 encounters from June 2011 to March 2016 at UCSF.
Interventions: None.
Main outcome measures: Area under the receiver operating characteristic (AUROC) curve for detection and prediction of sepsis, severe sepsis and septic shock.
For detection of sepsis and severe sepsis, the algorithm achieved high AUROC values on the UCSF dataset, with transfer learning from the intensive care dataset supporting generalisability to the additional institutions' datasets.
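The feature scheme described above (current vital signs combined with their recent changes) and gradient tree boosting are both standard components, and the following Python sketch shows how they fit together. The vitals, the label-generating rule and the use of scikit-learn's GradientBoostingClassifier are all invented stand-ins, not the validated algorithm, and the transfer-learning step is not shown.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000

# Hypothetical vitals (heart rate, respiratory rate, temperature, systolic BP,
# diastolic BP, SpO2): current value plus change over the preceding interval.
current = rng.normal([85, 18, 37.0, 120, 75, 97], [15, 4, 0.6, 20, 12, 2], (n, 6))
delta = rng.normal(0, [8, 2, 0.3, 10, 6, 1], (n, 6))
X = np.hstack([current, delta])

# Toy label: rising heart rate plus falling systolic BP raises "sepsis" risk.
risk = 0.05 * delta[:, 0] - 0.06 * delta[:, 3] + 0.04 * (current[:, 0] - 85)
y = rng.random(n) < 1 / (1 + np.exp(-(risk - 2)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))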
Considerable interest and controversy over a possible decline in semen quality during the 20th century raised concern that semen quality could have reached a critically low level where it might affect human reproduction. The authors therefore initiated a study to assess reproductive health in men from the general population and to monitor changes in semen quality over time.
Cross-sectional study of men from the general Danish population. Inclusion criteria were place of residence in the Copenhagen area, and both the man and his mother being born and raised in Denmark. Men with severe or chronic diseases were not included.
Setting: Danish one-centre study.
Participants: 4867 men, median age 19 years, included from 1996 to 2010.
Main outcome measures: Semen volume, sperm concentration, total sperm count, sperm motility and sperm morphology.
Only 23% of participants had optimal sperm concentration and sperm morphology. Compared with historic data on men attending a Copenhagen infertility clinic in the 1940s and on men who recently became fathers, both of these groups had significantly better semen quality than our study group from the general population. Over the 15 years, median sperm concentration increased from 43 to 48 million/ml (p=0.02) and total sperm count from 132 to 151 million (p=0.001). The median percentages of motile spermatozoa and morphologically abnormal spermatozoa were 68% and 93%, respectively, and did not change during the study period.
This large prospective study of semen quality among young men of the general population showed an increasing trend in sperm concentration and total sperm count. However, only one in four men had optimal semen quality. In addition, one in four will most likely face a prolonged waiting time to pregnancy if they want to father a child in the future, and another 15% are at risk of needing fertility treatment. Thus, reduced semen quality seems so frequent that it may impair fertility rates and further increase the demand for assisted reproduction.